mirror of https://github.com/Nezreka/SoulSync.git
Branches: dev, main, fix/disable-beatport-features, johnbaumb-discover-redesign
Tags: 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4.0, 2.4.1, 2.4.2, 2.5.0, 2.5.1, v0.65
949 Commits (main)
b9feed1a67: Add min delay between slskd searches (Bell Canada anti-abuse fix)

- New soulseek.search_min_delay_seconds knob forces a gap between consecutive searches; smooths the burst pattern that trips ISP anti-abuse (Reddit report: Bell Canada cuts the WAN after rapid peer-connection spikes) even when the existing 35/220 sliding-window cap isn't hit.
- Throttle math lifted to a pure compute_search_wait_seconds helper so the gate logic is testable independently of asyncio.sleep and the singleton client.
- New field on Settings → Connections → Soulseek; default 0 = disabled, so existing users see no change.
- 15 helper-boundary tests pin defaults / no-throttle, the sliding-window cap (legacy), the min-delay gate (the new burst-smoother), max-of-both gates, and defensive paths.

23 hours ago
6233860d66: Fix Copy Debug Info music_source + surface missing services

- music_source / spotify_connected / spotify_rate_limited were reading a non-existent 'spotify' key on _status_cache and silently falling through to the missing-value default (always 'unknown' / False). Now routed through the canonical accessors get_primary_source and get_spotify_status.
- Added hydrabase_connected, youtube_available, hifi_instance_count, and always_available_metadata_sources so the debug dump reflects the full service surface.
- Removed a local re-import of get_spotify_status that made Python 3.12 treat the name as function-scoped, breaking the new lambda above it (NameError on a free variable); the module-level import already exists.
- 11 endpoint-level tests pin music_source / spotify_* / hydrabase_* / youtube_available / always_available_metadata_sources / hifi_instance_count and the defensive fall-through paths when each lookup raises.

1 day ago
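The local re-import bug is a general Python scoping pitfall worth pinning down: any import statement inside a function body makes the imported name local to that function at compile time, even if the import line is never executed, so a closure defined earlier in the same function stops resolving the module-level name. A minimal standalone reproduction (names here are illustrative, not SoulSync's):

```python
def helper():
    return "module-level"


def buggy():
    probe = lambda: helper()  # intends the module-level helper
    try:
        return probe()
    except NameError:
        # "cannot access free variable 'helper' ..." -- the import
        # below made `helper` local to buggy() at compile time, so
        # the lambda closes over an unbound cell, not the global.
        return "NameError"
    # Never executed, but its mere presence rebinds the scope:
    from math import sqrt as helper


def fixed():
    probe = lambda: helper()  # no local import: global lookup works
    return probe()
```

Deleting the function-local import (as the commit does) restores the global lookup; the module-level import is all that is needed.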
4892baf8d4: Skip already-owned tracks during download discography

- New track_already_owned helper wraps db.check_track_exists at the same confidence threshold the discography backfill repair job uses (0.7) — name+artist+album, format-agnostic, so blasphemy-mode libraries (flac → mp3 + delete original) match correctly.
- The endpoint runs the check after the artist + content-type filters and before add_to_wishlist, so a second discography click on the same artist no longer re-queues every track that already downloaded.
- The per-album response carries a new tracks_skipped_owned counter alongside the existing artist/content/wishlist skip categories.

Discord report (Skowl). 1 day ago
d4ad5bf57f: Filter cross-artist + content-type tracks during download discography

- Drop tracks where the requested artist isn't named in track.artists (keeps features, drops compilation / appears_on contamination).
- Honor watchlist.global_include_live/remixes/acoustic/instrumentals the same way the discography backfill repair job already does.
- Surface per-album skip counts in the NDJSON stream (artist mismatch + content filter) so the UI can show what was filtered.

Closes #559. 1 day ago
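The cross-artist filter reduces to a membership check over the track's credit list. A sketch under stated assumptions (the commit only specifies "requested artist isn't named in track.artists"; the function name and case-insensitive comparison are illustrative):

```python
def is_cross_artist(track_artists: list[str], requested_artist: str) -> bool:
    """True when the requested artist appears nowhere in the track's
    credit list -- i.e. compilation / appears_on contamination.
    Features survive because the requested artist IS in the list,
    just not necessarily first."""
    wanted = requested_artist.strip().casefold()
    return all(a.strip().casefold() != wanted for a in track_artists)
```

A feature track lists the requested artist among its credits and is kept; a compilation track credited only to other artists is dropped.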
7e70bbaaed: Repair worker: log debug instead of bare pass on active-server lookup

Ruff S110 (try-except-pass) on the lookup inside `_build_unresolvable_album_folder_error`. The swallowed exception is benign (some test stubs don't expose `get_active_media_server` and we fall back to 'unknown'), but Ruff is right that a bare pass is a smell. The logger is the existing repair_worker logger, so this matches the "debug-log on optional-input failure" pattern already used in `core/library/path_resolver.py:_collect_base_dirs`.

1 day ago
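The S110 fix swaps a bare `except: pass` for a debug-level log before falling back. A sketch of the pattern with a stand-in lookup (not the actual repair-worker code):

```python
import logging

logger = logging.getLogger("repair_worker")


def active_server_name(context) -> str:
    """Optional input: some test stubs don't expose the lookup at all."""
    try:
        return context.get_active_media_server()
    except Exception:
        # Ruff S110: the swallow is intentional, but leave a trace.
        logger.debug(
            "active media server lookup failed; using 'unknown'",
            exc_info=True,
        )
        return "unknown"
```

The behavior is unchanged (the fallback still fires); the only difference is that the failure is now visible at debug level.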
56ae10693b: Album Completeness: surface diagnostic when resolver can't find album folder

GitHub issue #558: clicking Auto-Fill / Fix Selected on the Album Completeness findings page returned a flat "Could not determine album folder from existing tracks" error with no diagnostic. The reporter is on Navidrome on Docker — the path resolver in `core/library/path_resolver.py` couldn't find any of the album's tracks on disk because Navidrome's Subsonic API doesn't expose filesystem library paths the way Plex's API does (probed in #476). Default settings → `library.music_paths` empty → no base directories to probe → silent None. The user had no signal about what to configure.

Not a regression of #476 — that fix targeted Plex auto-discovery and worked correctly for it. Navidrome was never covered because the protocol gives the resolver nothing to probe.

Fix scoped to the diagnostic surface, not auto-magic discovery:
- Added `resolve_library_file_path_with_diagnostic` returning `(resolved, ResolveAttempt)`. ResolveAttempt records what the resolver tried — `raw_path_existed`, `base_dirs_tried`, `had_config_manager`, `had_plex_client`. Pure data, no rendering opinions.
- Legacy `resolve_library_file_path` becomes a thin wrapper that drops the attempt; every existing call site is unchanged.
- `RepairWorker._fix_incomplete_album` now uses the diagnostic helper and renders a multi-part error via `_build_unresolvable_album_folder_error`: names the active media server, shows one sample DB-recorded path, lists every base directory the resolver actually probed, and points the user at Settings → Library → Music Paths as the actionable fix.
- Distinguishes empty-base-dirs vs tried-and-failed cases so the user knows whether to add a mount or fix the existing one.
- No auto-probing of common Docker conventions (`/music`, `/media`, etc.). Speculative — the suffix-walk could resolve to wrong dirs if a conventional path happens to contain a partial collision. The user stays in control.

New tests:
- 7 in `tests/library/test_path_resolver.py`: tuple-shape contract, raw-path-existed short-circuit, base dirs listed even on walk failure, had-flags reflect caller inputs, no-base-dirs returns None with an empty attempt, legacy `resolve_library_file_path` delegates correctly across happy / suffix-walk / failure paths.
- 8 in `tests/test_repair_worker_unresolvable_folder_error.py`: active server name in error, sample DB path verbatim, base dirs listed, empty-base-dirs phrased differently, Settings hint always present, defensive against None attempt / missing sample / missing config_manager.

Full pytest sweep: 2774 passed.

1 day ago
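The diagnostic contract described above can be sketched as follows (the field names and tuple shape are from the commit; the dataclass form and the wrapper body are assumptions, and the resolver body here is a stub, not the real suffix-walk):

```python
from dataclasses import dataclass, field


@dataclass
class ResolveAttempt:
    """Pure data about what the resolver tried -- no rendering opinions."""
    raw_path_existed: bool = False
    base_dirs_tried: list = field(default_factory=list)
    had_config_manager: bool = False
    had_plex_client: bool = False


def resolve_library_file_path_with_diagnostic(raw_path, base_dirs):
    # Illustrative body: the real resolver suffix-walks each base dir.
    # Here we only demonstrate the (resolved, attempt) tuple contract.
    attempt = ResolveAttempt(base_dirs_tried=list(base_dirs))
    return None, attempt


def resolve_library_file_path(raw_path, base_dirs):
    """Legacy wrapper: same answer, diagnostic dropped.
    Existing call sites keep working unchanged."""
    resolved, _attempt = resolve_library_file_path_with_diagnostic(
        raw_path, base_dirs
    )
    return resolved
```

Keeping the attempt as pure data lets `_build_unresolvable_album_folder_error` decide how to render it (empty-base-dirs vs tried-and-failed) without the resolver knowing about error messages.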
3af2d34cee: Auto-import: fall through to other metadata sources when primary returns no match

Discord report: 16 Bandcamp indie albums sat in staging because auto-import couldn't identify them, but the manual search bar at the bottom of the Import Music tab found the same albums fine. Trace: `_search_metadata_source` only queried `get_primary_source()` — single source, no fallback. Meanwhile `search_import_albums` (the manual search bar) already iterated `get_source_priority(get_primary_source())` and broke on the first source with results. Asymmetric behavior on the same album: manual worked, auto-import didn't.

Fix: lift `_search_metadata_source` to use the same source-chain pattern. Try the primary first; if it returns nothing OR scores below the 0.4 threshold, fall through to the next source in priority order. The first source producing a strong-enough match wins. The result dict carries the `source` that actually matched (not the primary name) so downstream `_match_tracks` calls the right client. Defensive per-source try/except so a rate-limited or auth-failed source doesn't abort the chain. Unconfigured sources (client=None) are silently skipped.

Scoring lift: the scoring math is extracted to a pure `_score_album_search_result` helper so the weight tweaks (album 50% / artist 20% / track-count 30%) are pinned at the function boundary, independent of the orchestrator (per-source iteration, exception containment, threshold check). Weight constants exposed at module level (`_ALBUM_NAME_WEIGHT`, `_ARTIST_NAME_WEIGHT`, `_TRACK_COUNT_WEIGHT`) — greppable, bumpable in one place. Pre-extraction these were magic numbers inline.

27 new tests:
- 9 integration tests in `test_auto_import_multi_source_fallback.py`: primary-success path unchanged (no fallback fires, only the primary client called), primary-empty falls through, primary-weak-score falls through, first fallback success stops the chain (no wasted API calls on remaining sources), all-sources-fail returns None, per-source exception contained, unconfigured source skipped, result `source` field reflects the winning source, `identification_confidence` from the winning source.
- 18 helper tests in `test_album_search_scoring.py`: weights sum to 1.0, album weight dominant (invariant pin), perfect match returns 1.0, per-component contribution (album / artist / track-count), Bandcamp vs streaming track-count mismatch (the 7-files-vs-4-tracks case still scores ~0.87, above threshold), zero-track-count and zero-file guards, huge-mismatch non-negative guard, list-of-strings artist shape, missing `.name` / `.artists` / `None` total_tracks edge cases.

Backwards compatible: single-source users see no change — the chain just has one entry. Existing test `test_search_metadata_source_extracts_artist_id_from_dict_artist` needed one extra patch line for `get_source_priority`. Full pytest sweep: 2754 passed.

1 day ago
d5de724f9b: Multi-artist Deezer upgrade + double-append guard hardening
Two follow-ups to the multi-artist tag settings PR:
1. Deezer contributors upgrade — closes the "known limitation"
flagged in the prior commit. Deezer's `/search` endpoint only
returns the primary artist for each track; the full contributors
array (feat., remix collaborators, producers credited as artists)
lives on `/track/<id>` and gets parsed by `_build_enhanced_track`.
Without the upgrade Deezer-sourced tracks never got multi-artist
tags even with the right settings on.
Fix in `core/metadata/source.py`: when source==deezer AND the
search response had a single artist AND a track_id is available,
fetch full track details via `get_deezer_client().get_track_details`
and replace `all_artists` with the upgraded list.
- One extra API call per affected Deezer track
- Skipped when search already returned multiple (no-op fast path)
- Skipped for non-Deezer sources (Spotify/Tidal/iTunes search
responses already include all artists)
- Skipped when no track_id is available
- Defensive try/except: on /track/<id> failure (network error,
deezer client unavailable), fall through to the search-result
list — never lose the data we already had
2. Double-append guard hardened with a word-boundary regex.
Prior commit checked for `"feat." not in title.lower() and "(ft."
not in title.lower()` — too narrow. Source platforms produce
wildly different feat-marker conventions: "(feat. X)", "(Feat X)",
"(FEAT X)", "(Featuring X)", "[feat. X]", "ft. X" (no parens),
"FT. X", etc. Any of these as the SOURCE title would cause a
double-append: `"Track (Feat X) (feat. Y)"`.
Replaced with `re.search(r'\b(?:feat|feat\.|featuring|ft|ft\.)\b',
title, IGNORECASE)`. Word-boundary regex catches every common
variant. Substring matches like "Aftermath" containing `ft`
correctly fall through to the append path (pinned by a regression
test).
16 new tests (29 total in the file):
- 9 parametrized variants of the double-append guard
- 1 substring guard ("Aftermath")
- 6 Deezer upgrade scenarios (fires when expected, doesn't fire
for non-Deezer / multi-artist search / no track_id, defensive
fall-through on failure, no false-positive when /track/<id>
confirms single artist)
Full pytest 2727 passed.
2 days ago
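The hardened double-append guard uses the word-boundary regex quoted in the commit. A runnable sketch around it (the regex is the commit's; the surrounding function name and append format are illustrative):

```python
import re

# Word-boundary match catches "(feat. X)", "(Feat X)", "[feat. X]",
# "ft. X", "FT. X", "(Featuring X)", etc., while substrings like the
# "ft" inside "Aftermath" correctly fail to match.
_FEAT_MARKER = re.compile(
    r'\b(?:feat|feat\.|featuring|ft|ft\.)\b', re.IGNORECASE
)


def append_featured(title: str, featured: list[str]) -> str:
    """Append "(feat. X, Y)" unless the source title already carries
    any common feat-marker variant (double-append guard)."""
    if not featured or _FEAT_MARKER.search(title):
        return title
    return f"{title} (feat. {', '.join(featured)})"
```

This is exactly the failure the prior substring check allowed: a source title like "Track (Feat X)" would have become "Track (Feat X) (feat. Y)".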
c11a5b7eab: Multi-artist tag settings: implement artist_separator + feat_in_title + populate _artists_list
Three settings on Settings → Metadata → Tags were partially or
completely unimplemented. Reporter (Netti93) traced each one.
(1) `write_multi_artist` only "worked" because of a never-populated
`_artists_list` field. `core/metadata/source.py` built
`metadata["artist"]` as a hardcoded ", "-joined string but never
assigned `metadata["_artists_list"]`. `core/metadata/enrichment.py`
line 107 reads that field and gates the multi-value tag write
on `len(_artists_list) > 1` — always saw an empty list, silently
no-op'd the write.
(2) `artist_separator` (default ", ") was referenced in the UI +
settings.js save path but ZERO Python code read the value. Every
multi-artist track ended up with hardcoded ", " regardless of
what the user picked.
(3) `feat_in_title` (when true: pull featured artists into the title
as " (feat. X, Y)" and leave only primary in the ARTIST tag —
Picard convention) had no implementation at all.
Fix in source.py:
* Populate `_artists_list` from the search response's artists array
* Read `feat_in_title` and `artist_separator` configs
* When `feat_in_title=True` and >1 artist: ARTIST = primary only,
append "(feat. X, Y)" to title with double-append guard
* Else: ARTIST = artists joined with `artist_separator`
* Single-artist case unaffected by either setting
Double-append guard uses a word-boundary regex catching all common
"feat" variants source platforms produce — `feat`, `feat.`,
`featuring`, `ft`, `ft.` — case-insensitive. Substring matches
(e.g. "Aftermath" containing "ft") correctly fall through to the
append path.
Fix in enrichment.py ID3 branch:
* TPE1 stays as the display string (with separator or primary-only
per the user's settings)
* Multi-value list goes to a separate `TXXX:Artists` frame (Picard
convention) when `write_multi_artist` is on
* Pre-fix the ID3 path wrote TPE1 twice — single-string then list
— and the second `add` overwrote the first, clobbering both the
configured separator AND the feat_in_title semantics. Vorbis path
was already correct (separate "artist" + "artists" keys).
Known limitation (flagged in WHATS_NEW): Deezer's `/search` endpoint
only returns the primary artist. The full contributors array lives
on `/track/<id>`. Enrichment uses search-result data so Deezer-
sourced tracks may still get only the primary artist until a follow-
up commit wires the per-track contributors fetch into the enrichment
flow. Spotify, Tidal, and iTunes search responses include all
artists so they work now.
23 new tests in `tests/metadata/test_multi_artist_tag_settings.py`:
* `_artists_list` populated for multi/single/no-artist cases
* `artist_separator` drives ARTIST string (default ", " + custom
";" + custom "; " + " & ")
* Single-artist case unaffected by either setting
* `feat_in_title=True` pulls featured to title, leaves primary in
ARTIST
* `feat_in_title` no-op for single artist
* Double-append guard recognizes 9 source-title variants ("(feat.
X)", "(Feat. X)", "(FEAT X)", "(feat X)", "(Featuring X)",
"[feat. X]", "ft. X", "(ft X)", "FT. X")
* Substring guard test pins "Aftermath" doesn't false-positive
* Combined-settings precedence: feat_in_title wins ARTIST string
but `_artists_list` carries everyone for multi-value tag
Full pytest 2711 passed.
2 days ago
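The ARTIST/title decision described in the commit reduces to a small branch. A sketch (the settings semantics are the commit's; the function name and return shape are illustrative, and the double-append guard is elided here):

```python
def build_artist_and_title(artists, title, *, feat_in_title, artist_separator):
    """Return (artist_tag, title) per the multi-artist tag settings.

    Single-artist tracks are unaffected by either setting.
    """
    if len(artists) <= 1:
        return (artists[0] if artists else "", title)
    if feat_in_title:
        # Picard convention: primary only in ARTIST,
        # featured artists pulled into the title.
        return artists[0], f"{title} (feat. {', '.join(artists[1:])})"
    # Otherwise join everyone with the user's configured separator.
    return artist_separator.join(artists), title
```

Note the commit's precedence rule: even when `feat_in_title` wins the ARTIST string, `_artists_list` still carries every artist for the multi-value tag write.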
fc573a5f19: AudioDB worker: stop infinite loop on direct-ID lookup failure (#553)
Track enrichment was stuck in a constant retry loop. Logs showed
nothing but `Read timed out. (read timeout=10)` from
`lookup_track_by_id` repeating against the same track ID. AudioDB
itself was being hammered nonstop with no progress.
Cause: when an entity already has `audiodb_id` populated (from a
manual match or earlier scan) but `audiodb_match_status` is still
NULL — an inconsistent state some import paths can leave behind —
the worker tries a direct ID lookup. If that lookup fails (returns
None on timeout, which AudioDB's `track.php` endpoint hits
frequently because it's slow), the prior code logged "preserving
manual match" and returned WITHOUT marking status. Row stayed NULL
→ queue's NULL-status filter picked it up next tick → tried direct
lookup → timed out → returned → infinite loop.
The "preserve manual match" intent was correct: don't fall through
to the name-search path because that could overwrite a manually-set
`audiodb_id` with a wrong guess. Bug was the missing `_mark_status`
call before the early return.
Fix:
* `_process_item` direct-lookup-failure branch now calls
`_mark_status(item_type, item_id, 'error')` before returning. The
existing `audiodb_id` is preserved (column not touched). Queue's
NULL-status filter no longer re-picks the row.
* `_get_next_item` retry-cutoff queue priorities (4/5/6) extended
from `audiodb_match_status = 'not_found'` to
`audiodb_match_status IN ('not_found', 'error')`. Same `retry_days`
window. Transient AudioDB outages still recover automatically;
permanently-broken IDs eventually get re-attempted once a month
rather than staying errored forever.
5 new tests in `tests/test_audiodb_worker_stuck_track.py` use a real
SQLite DB (not mocks) so the SQL queries are actually exercised:
- lookup-returns-None marks status='error' (no infinite loop)
- lookup-raises-exception marks status='error' (defensive)
- lookup-success preserves the existing match-success path
- error-status row past retry-cutoff gets picked up again
- error-status row within cutoff stays skipped (loop prevention
works)
Only triggers for entities in the inconsistent `audiodb_id` set +
`match_status` NULL state. Happy path and already-matched /
already-not-found rows unchanged. Full pytest 2698 passed.
Closes #553.
2 days ago
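The loop-breaking fix reduces to "mark a status before the early return". A minimal sketch of the direct-lookup branch (the 'error' status string and the preserve-audiodb_id behavior are from the commit; the callable shapes are assumptions):

```python
def process_direct_lookup(audiodb_id, lookup_track_by_id, mark_status):
    """Direct-ID path for rows with audiodb_id set but a NULL
    audiodb_match_status (the inconsistent state some imports leave)."""
    try:
        result = lookup_track_by_id(audiodb_id)
    except Exception:
        result = None  # treat a raised timeout like a None return
    if result is None:
        # THE FIX: without this call the row keeps a NULL status and
        # the queue's NULL-status filter re-picks it every tick,
        # forever. audiodb_id itself is not touched, so the manual
        # match survives and the retry-cutoff window can re-attempt
        # it later.
        mark_status("error")
        return None
    return result
```

The companion queue change widens the retry filter to `audiodb_match_status IN ('not_found', 'error')`, so errored rows recover via the same `retry_days` window instead of staying errored forever.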
e3a4b513fd: Merge pull request #538 from kettui/fix/repair-worker-server-source

Preserve server source during album fill.

2 days ago
4fb9f38798: Your Albums: selectable wishlist modal + Tidal album resolution

Two-part fix to the Your Albums "Download Missing" flow on Discover.

Part A — UX redesign. The prior `downloadMissingYourAlbums()` ran a per-album loop that fired direct-download tasks via `openDownloadMissingModalForYouTube`. Reported as silently failing — a "Queuing 2/2" toast with no actual transfer activity. Even when downloads worked, bypassing the wishlist meant no retry / dedup / rate-limit / source-fallback handling. Replaced with a selectable-grid modal mirroring the Download Discography pattern from the library page. Click the download button → a checkbox grid opens showing every missing album (cover, title, artist, year, track count, source) → the user picks what they actually want → click "Add to Wishlist" → each album's tracks get resolved and queued through the existing wishlist auto-download processor. The NDJSON progress stream renders ✓/✗ per album.

New JS helpers:
- `_openYourAlbumsBatchModal(missingAlbums)` — builds the modal
- `_renderYourAlbumsBatchCard(row, index)` — per-album card
- `_yourAlbumsBatchSelectAll(select)` — bulk toggle
- `_updateYourAlbumsBatchFooterCount()` — live count + button text
- `_closeYourAlbumsBatchModal()` — overlay teardown
- `_startYourAlbumsBatchAddToWishlist()` — submit handler, NDJSON progress consumer
- `_yourAlbumsPickSource(album)` — picks the single best source id per row (priority: spotify → deezer → tidal → discogs)

Reuses the `.discog-*` CSS classes from the library Download Discography modal — no new CSS. Reuses the existing `/api/artist/<id>/download-discography` endpoint. The endpoint's URL artist_id param is functionally unused (the per-album payload carries everything — verified by reading the endpoint body), so the modal posts with the placeholder `your-albums` and gets multi-artist resolution for free, without backend changes.

Part B — Tidal album resolution. The originally reported bug: clicking download on Tidal-only albums did nothing because `/api/discover/album/<source>/<album_id>` had no `tidal` branch and `tidal_client` had no `get_album_tracks` method.

`core/tidal_client.py`: new `get_album_tracks(album_id, limit=None)` method. Two-phase: cursor-walk `/v2/albums/<id>/relationships/items?include=items` for track refs + position metadata (`meta.trackNumber` + `meta.volumeNumber`), then batch-hydrate via the existing `_get_tracks_batch` for artist/album names. Returns `Track` objects with `track_number` and `disc_number` attached, sorted by (disc, track) so multi-disc compilations render in album order.

`web_server.py`: new `'tidal'` source branch in `/api/discover/album/<source>/<album_id>`. Resolves album metadata via `get_album`, tracks via `get_album_tracks`, cover art via an inline `?include=coverArt` lookup. Same response shape as the Spotify/Deezer branches.

`webui/static/discover.js`:
- `tidal_album_id` added to `trySources` for the single-album click flow (`openYourAlbumDownload`)
- The same source picker drives the new batch modal
- Virtual-id generation includes `tidal_album_id` so Tidal-only albums get stable identifiers across discover-album-* / your-albums-* contexts

10 new tests in `tests/test_tidal_album_tracks.py` pin: single-page walk + hydration, multi-page cursor chain, multi-disc sort order (disc 1 → 2, in track order each), `limit` short-circuit at a page boundary, no-token short-circuit (no API call), HTTP error returns empty, 429 raises (propagates to the `rate_limited` decorator for retry), forward-compat type filter (skips non-track entries), partial-batch hydration failure containment, empty-album short-circuit (no batch call).

Full pytest: 2693 passed.

2 days ago
7a23d60f28: AcoustID scanner: file-tag fallback for legacy compilation tracks
Follow-up to the prior compilation-album scanner fix. That patch
made the scanner read `tracks.track_artist` (per-track artist
column) via COALESCE so compilation tracks would compare against
the right value. But tracks downloaded BEFORE the `track_artist`
column existed have track_artist=NULL — COALESCE falls back to
album artist (the curator) and the wrong-comparison case returns.
Fix: explicit 3-tier resolution in `_scan_file`:
1. DB `tracks.track_artist` if populated → trust it. Respects
manual edits from the enhanced library view (user who curated
the DB value but didn't re-tag the file gets their edit
respected, not overridden by stale file tag).
2. File's ARTIST tag via mutagen if present → use it. Tidal /
Spotify / Deezer all write the per-track artist into the
audio file at download time regardless of SoulSync's DB
schema, so it's ground truth even when the DB column is
stale or NULL. File is already open for fingerprinting so
mutagen tag-read is essentially free.
3. Album artist → final fallback for files without proper ARTIST
tags AND no DB track_artist. Existing pre-fix behavior.
`_load_db_tracks` SELECT now surfaces `track_artist` (raw, may be
empty/NULL via NULLIF) and `album_artist` separately in addition
to the COALESCE'd `artist` field — so `_scan_file` can tell the
difference between 'DB has a curated value' and 'DB fell back to
album artist'. Without this distinction, the file-tag fallback
would create false positives when DB is curated but file is stale.
5 new tests (11 total in the file) pin:
- File-tag-trumps-DB resolves the legacy NULL case (DB says
'Andromedik' (album curator), file says 'Eclypse', AcoustID
says 'Eclypse' → no flag)
- Tag-missing falls back to album artist (preserves existing
genuine-mismatch contract — file without tag + AcoustID
mismatch still flags)
- Mutagen exception swallowed (debug log, fall-through)
- File-tag matches DB → no behavioral change
- DB curated value trumps stale file tag (false-positive guard
— user edited DB without re-tagging file shouldn't get flagged)
Two existing test fixtures (`_make_context` callers) updated to
the new 10-column row shape.
SQL behavior verified empirically against real SQLite: NULL and
empty-string both flow through NULLIF → None in Python →
file-tag-fallback path. Modern populated values trump file tag.
2 days ago
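The 3-tier resolution is simple enough to pin down directly. The tier order and semantics are from the commit; the function shape is a sketch (in the real code tier 2 reads the file's ARTIST tag via mutagen while the file is already open for fingerprinting):

```python
def resolve_comparison_artist(db_track_artist, file_tag_artist, album_artist):
    # Tier 1: curated DB value. NULL and empty string are already
    # normalized to None upstream via SQL NULLIF, so truthiness
    # suffices. Respects manual edits from the enhanced library view.
    if db_track_artist:
        return db_track_artist
    # Tier 2: the file's own ARTIST tag -- written at download time
    # regardless of the DB schema, so it is ground truth when the DB
    # column predates track_artist.
    if file_tag_artist:
        return file_tag_artist
    # Tier 3: album artist (the compilation curator) -- pre-fix behavior.
    return album_artist
```

This is the case from the commit's first test: DB says 'Andromedik' (album curator, tier-1 value absent), the file tag says 'Eclypse', AcoustID says 'Eclypse', so the track is no longer flagged.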
f4c433c151: Tidal: rewire favorite albums + artists to V2 user-collection endpoints

Discord: Discover → Your Albums (and Your Artists) was returning nothing for Tidal users regardless of how many albums/artists they'd favorited. An audit found `get_favorite_albums` and `get_favorite_artists` called the deprecated `/v2/favorites?filter[type]=ALBUMS|ARTISTS` endpoint — it returns 404 for personal favorites because it's scoped to collections the third-party app created itself. The V1 fallback (`/v1/users/<id>/favorites/...`) is also dead because modern OAuth tokens carry `collection.read` instead of the legacy `r_usr` scope V1 demands (returns 403). Same root cause as the favorited-tracks fix from #502.

Fix: rewire to the working V2 user-collection endpoints — `/v2/userCollectionAlbums/me/relationships/items` and `/v2/userCollectionArtists/me/relationships/items` — using the same cursor-paginated pattern shipped for tracks.

Architecture:
- ID enumeration lifted into a generic `_iter_collection_resource_ids(path, expected_type, max_ids)` helper so tracks / albums / artists all share one walker. Three thin wrappers preserve the per-resource public surface (`_iter_collection_track_ids`, `_iter_collection_album_ids`, `_iter_collection_artist_ids`). Net: deduped ~80 lines that would otherwise be three near-identical copies.
- Batch hydration via `/v2/{albums|artists}?filter[id]=...&include=...` with extended JSON:API include semantics. One request returns up to 20 albums + their artists + cover artworks all in `included[]` (or 20 artists + their profile artworks). Three static helpers parse the response:
  - `_build_included_maps(included)` → indexes the array by type so per-resource lookup is O(1) per relationship ref
  - `_first_artist_name(rels, artists_map)` → resolves the primary artist from the relationships block; '' on missing/unknown
  - `_first_artwork_url(rel, artworks_map)` → picks `files[0]` (Tidal returns artwork files largest-first, so this gets the highest-resolution variant — typically 1280×1280)
- Public methods (`get_favorite_albums`, `get_favorite_artists`) preserve the prior return shape — a list of dicts matching what `database.upsert_liked_album` / `upsert_liked_artist` consume — so the discover aggregator path in `web_server.py` stays byte-identical. No caller changes needed.
- Deleted ~240 lines of dead code: the V2-favorites paths AND the V1 fallback paths from the old method bodies. Both are dead against modern OAuth tokens.

24 new tests in `tests/test_tidal_favorite_albums_artists.py` pin:
- Cursor-walker dispatch (album/artist iters pass the correct path + expected_type to the generic walker)
- Included-map building (groups by type, skips items missing an id)
- Artist + artwork relationship resolution (full + missing rels + unknown id + no-files cases)
- Batch hydration parse for albums (full attributes, missing relationships fall through to defaults, type filter excludes non-album entries, `filter[id]` param is comma-joined)
- Batch hydration parse for artists (same shape coverage)
- End-to-end orchestrator behavior (walk → batch → return, empty input short-circuits without an API call, BATCH_SIZE chunking on 41 IDs → 20/20/1, exception-from-iter returns [])

Endpoint paths empirically verified against the live Tidal API: `userCollectionArtists/me/relationships/items` returned 200 + 5 real artist refs for the test account. `userCollectionAlbums/...` returned 200 + empty (the account currently has 0 album favorites) but the response shape is correct. The deprecated `/v2/favorites?filter[type]=ALBUMS` returned 404. The V1 `/v1/users/<id>/favorites/albums` returned 403 with an explicit "Token is missing required scope. Required scopes: r_usr" message.

WHATS_NEW entry under the existing '2.5.1' block. Full pytest: 2678 passed.

2 days ago
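The generic cursor walker plus the 20-per-batch chunking can be sketched as follows. The JSON:API shape (`data[].type`/`id`, `links.next`, forward-compat type filter) matches the commit's description; the `fetch_page` callable is a stand-in for the authenticated HTTP client:

```python
from itertools import islice


def iter_collection_resource_ids(fetch_page, path, expected_type, max_ids=None):
    """Walk a JSON:API cursor chain, yielding resource IDs.

    fetch_page(url) -> parsed JSON dict (stand-in for the HTTP client).
    One walker serves tracks, albums, and artists; thin per-resource
    wrappers just fix `path` and `expected_type`.
    """
    url = path
    yielded = 0
    while url:
        page = fetch_page(url)
        for item in page.get("data", []):
            if item.get("type") != expected_type:
                continue  # forward-compat: skip unknown entry types
            yield item["id"]
            yielded += 1
            if max_ids is not None and yielded >= max_ids:
                return
        url = page.get("links", {}).get("next")


def chunked(ids, size=20):
    """Batch IDs for filter[id]= hydration: 41 IDs -> 20 / 20 / 1."""
    it = iter(ids)
    while chunk := list(islice(it, size)):
        yield chunk
```

Phase two then hydrates each chunk in one request (`filter[id]=` comma-joined), with artists and artworks resolved from the `included[]` array via type-indexed maps.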
6fe85f2f37: Server playlist sync: append mode (preserve user-added tracks)
Discord report (CJFC, 2026-04-26): syncing a Spotify playlist to the
server overwrote anything manually added to the server-side playlist.
The fix adds a per-sync mode picker next to the Sync button on the
playlist details modal — Replace (default, current delete-recreate
behavior) or Append only (preserves existing tracks, only adds new
ones). Useful when the source platform caps playlist size and the
user is manually building beyond it on the server.
Implementation:
* New `append_to_playlist(name, tracks)` method on Plex / Jellyfin /
Navidrome clients. Each uses the server's NATIVE append API:
- Plex: `existing_playlist.addItems(new_tracks)`
- Jellyfin: `POST /Playlists/<id>/Items?Ids=...&UserId=...`
- Navidrome: Subsonic `updatePlaylist?songIdToAdd=...`
Falls back to `create_playlist` when the playlist doesn't exist
yet (first sync). No delete-recreate, no backup playlist created
(preserves playlist creation date + metadata + non-soulsync-managed
tracks).
* Dedup-by-server-native-id (ratingKey for Plex, GUID for Jellyfin,
song-id for Navidrome) — never re-adds a track already on the
playlist. Server-native identity, not fuzzy title+artist match,
so it can't false-collide.
* `sync_service.sync_playlist` accepts `sync_mode='replace'|'append'`
kwarg. Single if/else branch dispatches to `append_to_playlist` or
`update_playlist`. Threaded through `core/discovery/sync.run_sync_task`
and the `/api/sync/start` HTTP handler. Validation on the API rejects
unknown mode strings (defaults to 'replace').
* Frontend: per-playlist `<select id="sync-mode-${id}">` rendered next
to the Sync button in both modal renderers (sync-spotify.js for
Spotify playlists, sync-services.js for Deezer ARL playlists).
`startPlaylistSync` reads the select at click time; missing select
(other callers like discover.js) defaults to 'replace' so backward
compat preserved without per-call-site updates.
* SoulSync standalone has no playlist methods at all and the modal
hides the Sync button entirely on it via `_isSoulsyncStandalone` —
dispatch never reaches that path, no defensive fallback needed.
15 new tests pin per-server append behavior:
- missing playlist → create_playlist delegation
- dedup filtering (existing IDs skipped, only new tracks added)
- empty new-track set short-circuits without API call
- failure paths return False without raising
- contract listing (KNOWN_PER_SERVER_METHODS includes
'append_to_playlist'; Plex / Jellyfin / Navidrome all implement)
Plus tests/discovery/test_discovery_sync.py fake `sync_playlist`
fixture got `sync_mode='replace'` default to match the new signature
(was breaking after the kwarg add; now passing).
WHATS_NEW entry under new '2.6.0' block (hidden by
`_getLatestWhatsNewVersion` until next release bump).
Closes CJFC discord request.
3 days ago
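The append-mode dedup is by server-native ID, not fuzzy title+artist matching. A sketch (the `native_id` callable stands in for the per-server identity: Plex ratingKey, Jellyfin GUID, Navidrome song id):

```python
def tracks_to_append(existing_ids, candidates, native_id):
    """Filter candidates down to tracks not already on the playlist.

    Dedup key is the server's native identity, so it cannot
    false-collide the way fuzzy title+artist matching can.
    """
    seen = set(existing_ids)
    out = []
    for track in candidates:
        tid = native_id(track)
        if tid in seen:
            continue
        seen.add(tid)  # also dedups within the incoming batch itself
        out.append(track)
    return out
```

When the filtered list comes back empty, the commit's append path short-circuits without an API call; when the playlist doesn't exist yet, it falls back to `create_playlist` for the first sync.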
1d6e213b16: version bump

3 days ago
f28f9808db: Tidal: surface Favorite Tracks as virtual playlist (issue #502)
Adds the user's Tidal favorited tracks ("My Collection" in the Tidal
app) as a virtual playlist alongside their real playlists, mirroring
how Spotify's "Liked Songs" is treated.
Reporter (yug1900) located the working endpoint after the prior
`/v2/favorites?filter[type]=TRACKS` attempt returned empty data —
that endpoint is scoped to collections the third-party app created
itself, not personal favorites. Real endpoint:
GET /v2/userCollectionTracks/me/relationships/items
?countryCode=US&locale=en-US&include=items
Cursor-paginated (20 per page, follow `links.next` with
`page[cursor]=...` until exhausted). Response only carries
track-level attributes — artist + album NAMES come back as
relationship-link stubs, not embedded data.
Implementation:
* Two-phase fetch — `_iter_collection_track_ids` walks the cursor
chain to enumerate every track id (cheap, IDs only), then
`get_collection_tracks` batch-hydrates 20 IDs at a time through
the existing `_get_tracks_batch` helper which already knows how
to `include=artists,albums`. No duplication of the JSON:API
artist/album parse, no new dataclass shape.
* Virtual playlist `tidal-favorites` appended to the end of
`/api/tidal/playlists`. ID intentionally has no colon —
sync-services.js renderer interpolates IDs into CSS selectors
via template literals (`#tidal-card-${p.id} .foo`) and a `:`
would parse as a CSS pseudo-class operator.
* `tidal_client.get_playlist("tidal-favorites")` recognizes the
virtual id and dispatches to the collection path internally, so
every per-id consumer gets it for free: detail endpoint, mirror
auto-refresh automation, "build Spotify discovery from Tidal
playlist" flow.
OAuth scope expansion:
* Added `collection.read` to both OAuth flows (the
`core/tidal_client.py::authenticate` standalone path AND the
`web_server.py::auth_tidal` web flow — they were independent
scope strings that both needed updating).
* Added `prompt=consent` to both flows — without it Tidal silently
returns a token carrying only the ORIGINAL scope set even after
re-authentication, because Tidal treats the existing
authorization as still valid.
* New `disconnect()` method + `POST /api/tidal/disconnect`
endpoint + Disconnect button next to Authenticate in Settings →
Connections → Tidal — required for users whose existing token
predates the scope expansion (forces a clean grant).
Reconnect-needed UI hint:
* `_collection_needs_reconnect` flag set on 401/403 from the
collection endpoint, cleared on next successful walk, NOT set
on 5xx (transient server errors must not falsely tell the user
to reconnect).
* Listing endpoint reads the flag and surfaces a placeholder card
titled "Favorite Tracks (reconnect Tidal to enable)" with a
description pointing at Settings, so the user has something
visible to act on instead of a silently missing row.
Diagnostic logging — collection request URL + response status +
first 300 bytes of body now logged at info level so future "why
is my collection empty" reports can be diagnosed from app.log
without needing live reproduction.
22 new tests pin: cursor walk (full chain, max-ids cap mid-page +
at page boundary), auth gates (no token / 401 / 403 all bail
clean), reconnect-flag lifecycle (set on 401/403, cleared on next
successful walk, NOT set on 5xx), forward-compat type filter
(non-track entries skipped), count helper, batch hydration
delegation + chunking at the 20-per-batch cap, partial-batch
failure containment, virtual-id dispatch (real playlist ids still
flow through the normal path).
Closes #502.
|
3 days ago |
|
|
b5b6673216 |
Reorganize: hint at Unknown Artist Fixer for placeholder-metadata rows
Phase B of foxxify discord report. Pre-#524 manual-import bug left
some albums in the library with `artist=Unknown Artist` and `album.title
= <numeric album_id>`. Reorganize couldn't place them (no usable
metadata source ID) and emitted a generic "run enrichment first" hint
that doesn't apply — enrichment can't fix these rows. The right tool
is the existing `Fix Unknown Artists` repair job (reads file tags,
re-resolves metadata, re-tags + moves files).
Discoverability gap, not a logic gap. Reorganize now detects the bad-
metadata shape (Unknown Artist OR album.title that's a 6+ digit
numeric id) and emits a clear "run the Fix Unknown Artists repair
job" hint at both reason-emit sites (planner + executor). No
duplication of fixer logic.
WHATS_NEW entry covers both Phase A (orphan-format sibling handling,
already committed in
|
3 days ago |
|
|
d944a166f8 |
Reorganize: move orphan-format siblings alongside the canonical
Discord report (Foxxify): users with the lossy-copy feature enabled
have `track.flac` AND `track.opus` side-by-side in their library.
Reorganize is DB-driven and only knows about ONE file per track (the
lossy copy). The other format used to get left behind in the old
location while the canonical moved to its new destination.
Empty-folder cleanup never fired because the source dir still had
audio.
# What was happening
1. User downloads album → SoulSync transcodes `.flac` → `.opus`,
embeds `.lrc` lyrics
2. DB row points at `.opus` (the lossy library copy)
3. User runs Library Reorganize
4. Reorganize moves `.opus` to new template path →
`Artist/Album/01 Track.opus`
5. `.flac` orphan stays at old location, `.lrc` follows `.opus`
6. Source dir still has the `.flac` → cleanup skips → empty folders
pile up
# Fix
`_finalize_track` now finds sibling-stem audio files at the source
BEFORE removing the canonical and moves them to the same destination
dir, preserving both formats with the canonical's renamed stem.
Two new helpers in `core/library_reorganize.py`:
- `_find_sibling_audio_files(audio_path) -> list[str]` — returns
paths to other audio files in the same directory that share the
canonical's filename stem. Excludes the canonical itself, non-audio
extensions (sidecars handled separately by `_delete_track_sidecars`),
and different-stem tracks (different songs in the same dir).
- `_move_sibling_to_destination(sibling_src, canonical_dst) -> str`
— moves a sibling-format file to the canonical's destination dir
with the canonical's renamed stem + the sibling's original
extension. Defensive — OS errors logged at warning, returns None,
doesn't raise (caller treats as best-effort).
After the fix:
1. `.opus` → moved to new dir
2. `.flac` sibling detected → moved to same new dir with same stem
3. Source `.opus` removed, `.lrc` sidecar deleted from source
4. Source dir empty → cleanup proceeds normally
5. Both formats end up paired at the new location
# Tests added (11)
`tests/test_reorganize_orphan_format_handling.py`:
- Sibling detection: finds `.flac` when `.opus` is canonical (and
symmetric direction), excludes canonical itself, excludes
different-stem tracks, excludes non-audio (`.lrc`/`.nfo`), finds
multiple siblings (3+ formats), returns empty when source dir missing
- Sibling move: renames to canonical stem + preserves sibling
extension, creates destination dir if missing, no-op when source
already at destination, returns None on OS failure (caller treats as
best-effort)
# Verification
- 11/11 new tests pass
- 97/97 reorganize-related tests pass total (no regression in
existing helpers)
- Ruff clean
# Follow-up in same PR
Next commit: cleanup repair job for legacy "Unknown Artist /
album_id" rows from the pre-#524 manual-import bug. Reorganize
correctly leaves those alone (they're DB-broken, not file-broken),
but a separate maintenance job to find + re-enrich them is needed.
|
3 days ago |
|
|
812db1fbbf |
AcoustID scanner: prefer track_artist for compilation albums
Discord report (Skowl): downloaded a compilation album ("High Tea
Music: Vol 1") where every track has a different artist (Eclypse,
Andromedik, T & Sugah, Gourski, etc.) and the AcoustID scanner
flagged every single track as Wrong Song. The file tags had the
correct per-track artist (e.g. "Eclypse" for "City Lights"), but
the scanner compared against the album-level artist ("Andromedik",
the curator). Raw similarity 12% → Wrong Song flag.
# Why the prior multi-value fix didn't help
Foxxify's case (just-merged PR): AcoustID returned multi-value
credit "Okayracer, aldrch & poptropicaslutz!" — primary IS in the
credit. Splitting found it.
Skowl's case: both sides single-value but DIFFERENT artists.
Splitter has nothing to find — Eclypse simply isn't in "Andromedik".
Different bug.
# Cause
Scanner SQL at `core/repair_jobs/acoustid_scanner.py:281` joined
the `artists` table via `tracks.artist_id` which points at the
ALBUM artist (the curator/label-name applied to every row in a
compilation). The `tracks.track_artist` column already holds the
correct per-track artist for compilations — populated by every
server-scan path (Plex `originalTitle`, Jellyfin `ArtistItems`,
Navidrome per-track `artist`) AND the auto-import / direct-download
post-process flow (`record_soulsync_library_entry` writes it when
different from album artist). Scanner just wasn't reading it.
# Fix
```sql
SELECT t.id, t.title,
COALESCE(NULLIF(t.track_artist, ''), ar.name) AS artist,
...
```
Prefers per-track artist when populated, falls back to album artist
for legacy rows / single-artist albums where `track_artist` is NULL.
`NULLIF(t.track_artist, '')` handles the empty-string-instead-of-null
case some legacy rows might have.
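The COALESCE/NULLIF fallback semantics can be demonstrated against an in-memory SQLite DB. The schema here is reduced to just the columns the query touches, so it is illustrative rather than the project's real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE artists (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE tracks (id INTEGER PRIMARY KEY, title TEXT,
                         track_artist TEXT, artist_id INTEGER);
    INSERT INTO artists VALUES (1, 'Andromedik');
    -- compilation row: per-track artist populated
    INSERT INTO tracks VALUES (1, 'City Lights', 'Eclypse', 1);
    -- legacy row: track_artist NULL -> fall back to album artist
    INSERT INTO tracks VALUES (2, 'Intro', NULL, 1);
    -- legacy row: empty string instead of NULL -> NULLIF rescues it
    INSERT INTO tracks VALUES (3, 'Outro', '', 1);
""")
rows = conn.execute("""
    SELECT t.title,
           COALESCE(NULLIF(t.track_artist, ''), ar.name) AS artist
    FROM tracks t JOIN artists ar ON ar.id = t.artist_id
    ORDER BY t.id
""").fetchall()
# [('City Lights', 'Eclypse'), ('Intro', 'Andromedik'), ('Outro', 'Andromedik')]
```

This is exactly why the fix uses a real SQLite DB in its tests: the NULL-vs-empty-string distinction only shows up when the expression runs against an actual engine.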
# Composes with Foxxify's multi-value fix
For the rare compilation track where AcoustID ALSO returns a
multi-value credit (e.g. compilation track has multiple credited
performers), both paths work together — `track_artist` gives the
correct expected primary, then the helper splits the credit and
finds it.
# Tests added (2)
- `test_load_db_tracks_prefers_track_artist_for_compilation` —
reporter's exact case: track with `track_artist='Eclypse'` AND
`artist_id` pointing at album artist 'Andromedik' resolves to
'Eclypse'. Second track with NULL `track_artist` falls back to
album artist 'Andromedik' (single-artist + legacy compat).
- `test_load_db_tracks_falls_back_when_track_artist_empty_string`
— empty string in `track_artist` (some legacy rows) → NULLIF
returns NULL → COALESCE falls back to album artist.
Both use a real SQLite DB so the COALESCE/NULLIF logic + JOIN
runs against actual schema (SimpleNamespace fakes can't simulate
JOINs).
# Verification
- 6/6 scanner tests pass (2 new + 4 existing)
- 2586 full suite passes (+2 from prior commit)
- Ruff clean
|
3 days ago |
|
|
df304eb016 |
AcoustID scanner: handle multi-value artist credits
Discord report (Foxxify): the AcoustID scanner repair job flagged
multi-artist tracks as Wrong Song because AcoustID returns the
FULL credit ("Okayracer, aldrch & poptropicaslutz!") while the
library DB carries only the primary artist ("Okayracer"). Raw
SequenceMatcher similarity scored ~43% — well below the 60%
threshold — so the scanner created a finding even though the
audio was correct. User couldn't fix without lowering the global
artist threshold to ~30% (which would let real mismatches through).
# Fix
Extended the shared `core/matching/artist_aliases.py::artist_names_match`
helper (originally lifted for #441) with credit-token splitting.
When the actual artist string contains common separators —
- punctuation: `,` `&` `;` `/` `+`
- keywords (whitespace-bounded): `feat.` `ft.` `featuring` `with`
`vs.` `x`
— the helper splits into individual contributors and checks each
against the expected artist. Primary-in-credit cases now resolve
at 100% instead of 43%.
Two pattern groups because punctuation separators don't need
surrounding whitespace, but keyword separators MUST be
whitespace-bounded — otherwise we'd split artists with `x` /
`with` etc. in their names ("JAY-X" → "JAY-" / "" issue).
Composes with the existing alias path: cross-script multi-artist
credits ("Hiroyuki Sawano" expected, "澤野弘之, FeaturedJp"
actual) work via alias-token-against-credit-token compare.
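The two-pattern-group splitter described above can be sketched with two regexes — punctuation separators split without surrounding whitespace, keyword separators only when whitespace-bounded. The regex details and function name are illustrative, not the project's exact patterns:

```python
import re

# Punctuation separators: no surrounding whitespace required.
_PUNCT = re.compile(r"\s*[,&;/+]\s*")
# Keyword separators: MUST be whitespace-bounded, otherwise artists
# with "x" / "with" etc. inside their names would get split.
_KEYWORDS = re.compile(r"\s+(?:feat\.|ft\.|featuring|with|vs\.|x)\s+",
                       re.IGNORECASE)

def split_artist_credit(credit: str) -> list[str]:
    """Split a multi-artist credit string into individual contributors."""
    if not credit:
        return []
    tokens = [credit]
    for pattern in (_PUNCT, _KEYWORDS):
        tokens = [piece for tok in tokens for piece in pattern.split(tok)]
    # Strip per-token whitespace, drop empty tokens.
    return [t.strip() for t in tokens if t.strip()]
```

With this shape, `"JAY-X"` survives intact (no whitespace around the `x`), while `"A x B"` splits into two contributors.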
# Wire-in
Scanner at `core/repair_jobs/acoustid_scanner.py:202` replaces
the raw `SequenceMatcher` call with `artist_names_match`. Pass
RAW artist strings (not pre-normalised by `_normalize`) so the
splitter can recognise separators — `_normalize` strips ALL
punctuation, which destroyed the very tokens the splitter needs.
The AcoustID post-download verifier (`core/acoustid_verification.py`)
already routes through `_alias_aware_artist_sim` which calls the
same helper — gets the multi-value benefit automatically without
a separate wire-in.
# New `split_artist_credit` exported helper
Pure-function helper for callers who want token-level access to
the credit list (debugging, UI, future per-token enrichment). Same
splitter logic, exposed as a top-level function.
# Tests added (14)
`tests/matching/test_artist_aliases.py` (+11):
- `TestSplitArtistCredit` — parametrised across 12 credit-string
formats (comma, ampersand, semicolon, slash, plus, feat./ft./
featuring, with, vs., x, single-token, empty), drops empty
tokens, strips per-token whitespace
- `TestMultiValueCreditMatching` — reporter's exact case
(Okayracer in 3-artist credit → 100%), primary in middle/end of
credit, genuine-mismatch still fails, single-token actual falls
through to direct compare, multi-value composes with aliases,
threshold still respected
`tests/test_acoustid_scanner.py` (+3):
- Reporter's case end-to-end through `_scan_file` — fingerprint
99% / title 100% / multi-artist credit → no finding created
- Genuine artist mismatch still creates finding (no false
suppression of real mismatches)
- `JobResultStub` minimal scaffold for the integration tests
# Verification
- 14 new tests pass (49 helper + 5 scanner total in their files)
- 110 matching + scanner tests pass total
- 2584 full suite passes (+25 from baseline 2559)
- Ruff clean
- Reporter's exact case (Okayracer in `Okayracer, aldrch &
poptropicaslutz!`) now scores 100% match → no Wrong Song flag
|
3 days ago |
|
|
8a4c0dc92a |
Deezer cover-art download: fallback to original URL on CDN refusal
Defensive followup. If Deezer CDN ever refuses the upgraded
1900×1900 URL for a specific album (rare — empirically tested 4
albums and none hit it), pre-fix would have succeeded with the
1000×1000 URL and post-fix would have failed entirely.
Both download sites now retry with the original URL when the
upgraded URL fails:
- `core/metadata/artwork.py::download_cover_art` — auto post-process
flow. Resolves the original URL from album_info / context the same
way the existing path does.
- `core/tag_writer.py::download_cover_art` — captures the original
URL before upgrade so the retry has it without a second context
lookup.
Strictly non-regressive: worst plausible post-fix case is now
identical to pre-fix (cover at 1000×1000 succeeds). Fallback only
fires on the rare CDN-refusal edge.
Tests added (2):
- `test_tag_writer_retries_with_original_on_failure` — upgraded URL
raises, original succeeds, both attempts logged in call order
- `test_tag_writer_no_fallback_for_non_dzcdn_url` — non-Deezer URLs
go through unchanged, no fallback path triggered (single attempt)
Verification:
- 18/18 helper + integration tests pass
- 2561 full suite passes
- Ruff clean
|
3 days ago |
|
|
80cf16339c |
Deezer cover art: upgrade CDN URL to 1900×1900 (was embedding 1000×1000)
Discord report (Tim): downloaded cover art via Deezer metadata
source came out visibly blurry in Navidrome / on phones — large
displays exposed the limited resolution.
# Cause
Deezer's API returns `cover_xl` URLs at 1000×1000. The underlying
CDN actually serves up to 1900×1900 by rewriting the size segment
in the URL path (same trick the iTunes mzstatic + Spotify scdn
upgrades already use). SoulSync wasn't doing the rewrite — every
Deezer-sourced cover got embedded at 1000×1000 regardless of how
much higher resolution the CDN had available.
# Verified empirically
```
$ for size in 1000 1400 1800 1900 2000; do curl -I "...{size}x{size}-..."; done
1000: 200 OK 106 KB
1400: 200 OK 198 KB
1800: 200 OK 331 KB
1900: 200 OK 371 KB
2000: 403 Forbidden
```
1900 is the safe ceiling. Above that the CDN returns 403. CDN
serves source-native bytes when source < target (smaller-source
albums get same bytes whether we ask for 1000 or 1900), so asking
for 1900 universally is safe.
# Fix
New `_upgrade_deezer_cover_url(url, target_size=1900)` helper in
`core/deezer_client.py`. Pure function, mirrors the
`_upgrade_spotify_image_url` pattern that already lives in
`core/spotify_client.py`. Defensive on every input shape:
- Empty / None → returned as-is
- Non-Deezer URL (no `dzcdn`) → returned as-is
- No size segment in URL → returned as-is
- Already at/above target → returned as-is (idempotent, never
downgrades)
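The defensive size-segment rewrite can be sketched as a pure function. This is an assumption-laden illustration (the name mirrors the helper described above, and the `NNNxNNN` path-segment match is inferred from the message), not the project's actual code:

```python
import re

# dzcdn cover URLs carry a WxH size segment in the path.
_SIZE_SEG = re.compile(r"(\d+)x(\d+)")

def upgrade_deezer_cover_url(url: str, target_size: int = 1900) -> str:
    """Rewrite the size segment of a dzcdn URL up to target_size.

    Defensive on every input shape: empty/None, non-Deezer URLs,
    missing size segment, and already-at/above-target sizes are all
    returned unchanged (idempotent, never downgrades).
    """
    if not url or "dzcdn" not in url:
        return url
    m = _SIZE_SEG.search(url)
    if not m:
        return url
    if int(m.group(1)) >= target_size:
        return url  # already at/above target: never downgrade
    return url[:m.start()] + f"{target_size}x{target_size}" + url[m.end():]
```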
Applied at both cover-download sites:
- `core/metadata/artwork.py::download_cover_art` — auto post-process
flow. Mirrors the existing iTunes mzstatic upgrade right above it.
- `core/tag_writer.py::download_cover_art` — enhanced library view's
"Write Tags to File" feature.
# Scope discipline
- Helper applied at the DOWNLOAD boundary, not the source extraction
point in `deezer_client.py`. Means cached entries in the metadata
cache + DB row `image_url` columns keep the original 1000×1000 URL
Deezer's API returned. Future CDN behavior changes only affect the
download path, not stored data.
- Pre-existing `prefer_caa_art` toggle (Settings → Library →
Post-Processing) untouched — orthogonal workaround for users who
want even higher quality (MusicBrainz Cover Art Archive, often
3000×3000+).
- iTunes / Spotify upgrade paths untouched — they already worked.
# Tests added (16)
`tests/metadata/test_deezer_cover_url_upgrade.py`:
- Standard upgrade: default target 1900 on cover URL, alternate
dzcdn host (`e-cdns-images.dzcdn.net` vs `cdn-images.dzcdn.net`),
artist picture URLs (same path pattern), 500×500 source upgrades
too
- Custom target size: smaller target = no-op (never downgrade),
larger target works
- Idempotent: already at/above target returned unchanged
- Defensive on non-Deezer URLs: parametrised across 5 hosts
(Spotify scdn, iTunes mzstatic, MB CAA, Last.fm, random) — all
returned untouched
- Defensive on malformed Deezer URL (no size segment) → returned
as-is
- Empty / None handling
# Verification
- 16/16 helper tests pass
- 560/560 metadata + imports tests pass (no regression)
- 2559 full suite passes
- Ruff clean
|
3 days ago |
|
|
bc34d39ce9 |
Tighten alias-lookup trust + add ambiguity gate + diagnostic log
Cin pre-review pass on the false-positive risk. Three tightenings:
# 1. Bumped MB-search trust threshold from 0.6 → 0.85
`MusicBrainzService.lookup_artist_aliases` previously trusted any
MB search match scoring ≥ 0.6 combined (name-similarity + MB
relevance). For distinctive cross-script artists the user-reported
case targets (Hiroyuki Sawano, Сергей Лазарев, etc.) real matches
score ~1.0 — well above 0.85. The 0.6 floor was loose enough to
let in moderate matches for ambiguous names, risking aliases for
the wrong artist getting cached + applied.
Bumped to 0.85. Tighter without rejecting any of the legit
cross-script cases the PR is for.
# 2. Ambiguity gate — skip when results within 0.1 of best
When MB search returns multiple results all scoring high (within
0.1 of the best), the artist name is ambiguous — common name with
multiple distinct artists ("John Smith" returning 10 different
John Smiths). Pulling aliases for any one of them risks the wrong
artist's data bridging incorrectly to a file's tag.
Added explicit ambiguity detection: when 2+ results within 0.1,
skip alias lookup entirely + cache empty. Matches Cin's
"explicit > implicit" — the prior code just picked the highest
score blindly.
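The combined trust threshold and ambiguity window can be sketched as one explicit gate (function and parameter names are illustrative; the 0.85 threshold and 0.1 window are the values this commit describes):

```python
def pick_unambiguous_match(scored_results, threshold=0.85, ambiguity_window=0.1):
    """Return the best result only when it is both trusted and unambiguous.

    scored_results: list of (score, result) pairs from an MB search.
    Returns None when the best score is below threshold, or when 2+
    results land within ambiguity_window of the best (common name,
    multiple distinct artists -> skip alias lookup, cache empty).
    """
    if not scored_results:
        return None
    ranked = sorted(scored_results, key=lambda pair: pair[0], reverse=True)
    best_score, best = ranked[0]
    if best_score < threshold:
        return None  # not trusted enough to pull aliases
    close = [s for s, _ in ranked if best_score - s <= ambiguity_window]
    if len(close) >= 2:
        return None  # ambiguous name: don't guess which artist
    return best
```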
# 3. Diagnostic log when alias rescues a comparison
When the alias path triggers a PASS that direct similarity would
have FAILed, emit an INFO log: `Artist alias rescued comparison:
expected='X' vs actual='Y' (direct sim=0.00, alias 'Z' →
score=1.00)`.
Lets future bug reports trace which alias triggered which decision.
Doesn't change behavior — visibility only. Logs ONLY the rescue
case, not happy-path direct matches (no log spam).
# Tests added (5)
`test_artist_alias_service.py` (+3):
- `test_moderate_confidence_match_now_skipped_strict_threshold`
- `test_ambiguous_results_skipped`
- `test_unambiguous_high_confidence_match_succeeds`
`test_acoustid_verification_aliases.py` (+3):
- `test_alias_rescue_emits_info_log` — direct-fail + alias-pass
emits INFO log
- `test_no_log_when_direct_match_succeeds` — happy path quiet
- `test_no_log_when_alias_doesnt_help` — failed path also quiet
# Test infrastructure note
Logging tests use a directly-attached `ListHandler` on
`soulsync.acoustid.verification` (the actual logger name —
dot-separated by `get_logger`), NOT pytest's caplog. Same pattern
as the prior watchdog-test fix — caplog is intermittently flaky
in full-suite runs for soulsync namespace loggers. An owned
handler sidesteps both issues.
# Verification
- 85/85 matching tests pass (+5 from prior commit)
- 2543 full suite passes (+6 from prior, +85 PR-total)
- Ruff clean
- Reporter's Japanese + Russian regression tests still pass —
legit cross-script case (sim ≈ 1.0) clears the new 0.85
threshold easily
|
3 days ago |
|
|
11397307b2 |
Alias resolution polish: lazy-fire on direct-match failure + worker backfill
Two perf gaps that would have failed Cin's review:
# Gap #1: alias lookup fired unconditionally
Pre-fix in this commit, `_resolve_expected_artist_aliases` ran at the
top of every `verify_audio_file` call regardless of whether the
direct artist match would have passed. For users whose library is
mostly same-script (95% of cases), every successful verification was
paying for a wasted DB query (and possibly a wasted MB API call for
un-enriched artists).
Restructured the helper to accept a callable provider instead of a
pre-resolved list. Provider invoked LAZILY only when direct
similarity falls below `ARTIST_MATCH_THRESHOLD`. Verifier passes a
memoising thunk that resolves once across the 3 comparison sites
within one verification.
`_alias_aware_artist_sim` now accepts `aliases` as either:
- iterable of strings (used eagerly — backward compat with tests
that already know the aliases)
- callable returning the iterable (resolved on first need within a
verification)
Happy path (direct match passes): zero DB queries, zero MB calls.
Cross-script case: one resolution shared across 3 sites — same as
the prior contract.
# Gap #2: existing-MBID artists never got alias backfill
Worker's `_process_item` artist branch had an `existing_id`
short-circuit (line 296) that updated MBID status but skipped alias
fetch. Result: every user with an already-enriched library had MBIDs
but NULL aliases on day-one of this PR. Live MB lookup at
verify-time covered them, but at the cost of N live calls for N
artists across the library.
Added one-time backfill: when existing-MBID is found AND
`artists.aliases` for that row is empty, fetch + persist aliases.
Subsequent re-scan cycles short-circuit on the populated column — no
repeated MB calls.
New helper `_artist_aliases_empty(artist_id)` does the cheap NULL
check via direct SQL. Best-effort: defensively returns True on
errors so backfill happens (a redundant MB call is cheaper than
missing the backfill entirely).
# Tests added (9)
`test_acoustid_verification_aliases.py` (+6):
- `TestLazyAliasResolution` (3): no lookup when direct match passes,
lookup fires only when direct fails, lookup memoised across the 3
sites within one verification.
- `TestAliasProviderCallable` (3): iterable passed directly,
callable resolves lazily, callable returning empty falls back to
direct sim.
`test_artist_alias_service.py` (+3):
- `test_existing_mbid_path_backfills_aliases_when_column_empty`
- `test_existing_mbid_path_skips_backfill_when_aliases_already_set`
- `test_existing_mbid_backfill_failure_does_not_break_match`
# Verification
- 79/79 matching tests pass (+9 from prior commit)
- 2537 full suite passes (+9, +79 PR-total)
- Ruff clean
- Backward compat: every prior-commit test still passes (the
iterable-shape API still works alongside the new callable shape)
|
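The callable-or-iterable contract plus the memoising thunk described in the commit above can be sketched as two small helpers (names are illustrative, not the project's):

```python
def resolve_aliases(aliases):
    """Accept aliases as an iterable of strings OR a zero-arg callable
    returning one; a callable is only invoked at the moment of need."""
    if callable(aliases):
        aliases = aliases()
    return list(aliases or [])

def memoised(provider):
    """Wrap a provider so repeated comparison sites within one
    verification share a single resolution (one DB/MB round-trip)."""
    cache = {}
    def thunk():
        if "value" not in cache:
            cache["value"] = provider()
        return cache["value"]
    return thunk
```

On the happy path the thunk is simply never called, so a passing direct match costs zero lookups; on the cross-script path all comparison sites share one resolution.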
3 days ago |
|
|
7066233c37 |
Wire alias-aware artist match into AcoustID verifier — fixes #442
This is the user-visible commit. The reporter's exact two cases
(Japanese kanji, Russian Cyrillic) now pass verification instead of
being quarantined.
# What changed
Verifier's three artist-similarity sites now route through the
shared `core.matching.artist_aliases.artist_names_match` helper
instead of raw `_similarity`:
- `_find_best_title_artist_match` (per-recording scoring at the
best-match stage)
- Secondary scan when title matches but best-match's artist doesn't
(line ~355 pre-fix)
- Final fallback scan over all recordings (line ~403 pre-fix)
Aliases for the expected artist are resolved ONCE at the top of
`verify_audio_file` via `_resolve_expected_artist_aliases`, which
calls the new `MusicBrainzService.lookup_artist_aliases` chain
(library DB → cache → live MB). Single resolution per verification
regardless of how many AcoustID recordings come back — pinned by
test.
New helper `_alias_aware_artist_sim(expected, actual, aliases)`
wraps the pure helper with the verifier's normaliser (`_similarity`)
and threshold (`ARTIST_MATCH_THRESHOLD`). Returns a single float so
existing threshold-comparison code paths keep their shape — minimal
diff.
# Reporter's cases — verified
Case 1 (issue #442 verbatim):
File: YAMANAIAME by 澤野弘之
Expected: YAMANAIAME by Hiroyuki Sawano
Pre-fix: Quarantined (artist=0%)
Post-fix: PASS (alias '澤野弘之' resolved from MB)
Case 2 (issue #442 verbatim):
File: On the Other Side by Sergey Lazarev
Expected: On the other side by Сергей Лазарев
Pre-fix: Quarantined (artist=7%)
Post-fix: PASS (alias 'Sergey Lazarev' resolved from MB)
Both reproduced as regression tests with stubbed MB service.
# Backward compat
Three test cases pin that no-aliases / failure paths preserve
pre-fix behaviour exactly:
- Clear artist mismatch (different artist, same script) still FAILs
— aliases bridge synonyms, not unrelated artists.
- Exact title + artist match still PASSes regardless of aliases.
- MB service raise → verifier completes with direct similarity
(treats failure as "no aliases available" — same as pre-fix).
Also covers manual import: the import-modal "Search for Match" flow
goes through the same verifier, so the reporter's complaint that
"manual import simply throws them back in quarantine again" is fixed
by the same change.
# Tests added (11)
`tests/matching/test_acoustid_verification_aliases.py`:
- `_alias_aware_artist_sim`: alias bridges score ↑, no-aliases falls
back, aliases don't mask genuine mismatches
- `_find_best_title_artist_match` accepts + uses aliases
- Reporter's case 1 (Japanese) end-to-end
- Reporter's case 2 (Russian) end-to-end
- Backward compat: no-aliases mismatch still fails, exact match
still passes, MB-service-raise doesn't break verification
- Performance: alias lookup fires ONCE per verification regardless
of recording count
# Verification
- 11 new verifier tests pass
- 31 prior service tests pass
- 28 prior helper tests pass
- 294 matching + imports tests pass total (no regression)
- Ruff clean
|
3 days ago |
|
|
15244f24cf |
Live MB lookup for un-enriched artists with cache
Previous commit only populated `artists.aliases` for artists the MB
worker had enriched. But the AcoustID verifier (next commit) needs
aliases for ANY expected artist — including:
- Artists not yet in the user's library (first download)
- Artists in the library where MB enrichment hasn't run yet
- Artists where MB enrichment ran but found no MBID (NULL aliases)
This commit adds a multi-tier resolution helper that fills those
gaps without thrashing the MB API.
# Multi-tier resolution
`lookup_artist_aliases(artist_name) -> list[str]`:
1. **Library DB** (fast path): existing `get_artist_aliases` lookup
by name. No network. Most common path once the worker has enriched
everything.
2. **Cache** (existing `musicbrainz_cache` table,
entity_type=`artist_aliases`): a prior live lookup for this name.
Empty cache hit is respected (don't re-query when MB previously had
nothing).
3. **Live MB**: search artist by name → pick highest-confidence
match (combined name-similarity + MB relevance) → fetch aliases for
that MBID → cache the result.
Always returns a list (possibly empty), never raises. Empty result
on any tier means "no alternate spellings found, fall back to direct
match" — identical to the pre-fix behaviour.
# Threshold gate
Live lookup only trusts the MB search result when combined
similarity score >= 0.6. Below that, we'd be guessing at the wrong
artist — searching `John Smith` returns multiple John Smiths and
pulling aliases for one of them could mismatch. Cache the empty
result so we don't keep re-searching the same low-confidence name.
# Performance contract
Critical for the verifier path: 100 quarantine candidates with the
same expected artist must NOT trigger 100 MB API calls. Cache hit on
second + subsequent calls per unique artist name. Verified by test
pinning the call counts.
# Tests added (8)
- Tier 1 library DB hit — no MB API call fired
- Tier 3 live MB lookup → search → fetch → returns aliases
- Tier 2 cache hit on second call — no re-query
- Empty input → empty return + no API call
- Network failure on search → empty + cached so we don't retry
- No search results → empty + cached
- Low-confidence match (sim < 0.6) skipped — defends against picking
the wrong artist
- Library row exists but aliases NULL → falls through to live lookup
(defends against the half-enriched state)
# Verification
- 31/31 service tests pass (8 new + 23 prior)
- Ruff clean
|
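The tier order in the commit above can be sketched as a pure function over injected lookups. A hedged sketch under stated assumptions: the real code reads the library DB and the `musicbrainz_cache` table, whereas here the tiers are plain callables and a dict so the control flow is visible in isolation:

```python
def lookup_artist_aliases(name, library_lookup, cache, live_lookup):
    """Tiered resolution: library DB -> cache -> live MB.

    Always returns a list, never raises. Live results (including
    empty and failed lookups) are cached so the same name is never
    re-queried against MB.
    """
    if not name:
        return []
    found = library_lookup(name)       # tier 1: no network
    if found:
        return found
    if name in cache:                  # tier 2: empty cache hit is
        return cache[name]             # respected — don't re-query
    try:
        result = live_lookup(name)     # tier 3: live MB
    except Exception:
        result = []                    # network failure -> empty
    cache[name] = result               # cache so we don't retry
    return result
```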
3 days ago |
|
|
48d848bb74 |
MB worker populates artists.aliases on enrichment
Issue #442 — MusicBrainz exposes alternate-spelling aliases
(Japanese kanji `澤野弘之` for `Hiroyuki Sawano`, Cyrillic
`Сергей Лазарев` for `Sergey Lazarev`, etc.) on every artist record.
SoulSync's MB enrichment worker had access to this data via
`get_artist(mbid, includes=['aliases'])` but wasn't reading or
persisting it.
This commit wires the alias fetch into the worker's existing
artist-match path, persists to the new `artists.aliases` column
added in the prior commit, and adds a verifier-friendly read-by-name
lookup so the AcoustID verifier (next commit) can resolve aliases
without an MB round-trip when the artist is in the library.
# New service methods
- `fetch_artist_aliases(mbid) -> list[str]` — calls
`mb_client.get_artist(mbid, includes=['aliases'])`, parses the alias
array, dedupes case-insensitively. Returns empty list on any failure
(missing key, network error, malformed response) so transient MB
outages never trigger stricter quarantine decisions than the pre-fix
behaviour. Empty mbid → no API call.
- `update_artist_aliases(artist_id, aliases)` — persists as JSON
array to `artists.aliases`. Idempotent — overwrites prior value.
Empty list clears the column. None artist_id is a no-op.
- `get_artist_aliases(artist_name) -> list[str]` — reads back by
artist NAME (not id), case-insensitive. Used by the verifier where
the expected artist comes from track metadata — there's no library
row id at quarantine time. Returns empty list for unknown artists,
missing data, or corrupt JSON (defensive against legacy rows).
# Worker integration
`MusicBrainzWorker._process_item` artist branch:
- After `update_artist_mbid` succeeds, fetch aliases for the matched
MBID and persist via `update_artist_aliases`.
- Best-effort: alias fetch wrapped in try/except, failure logs at
debug level, doesn't regress the match outcome.
- No alias call when the artist didn't match an MBID (nothing to
enrich).
# Tests (23)
- `fetch_artist_aliases`: extracts names from MB response,
case-insensitive dedup, skips empty/null entries, missing-key
fallback, network failure → empty, empty mbid no API call, verifies
`inc=aliases` request param.
- `update_artist_aliases`: persists as JSON, idempotent overwrite,
empty list clears column, None id is no-op.
- `get_artist_aliases`: returns aliases for known artist,
case-insensitive lookup, empty for unknown artist / no-aliases row,
handles corrupt JSON + non-list shape gracefully.
- Worker integration: matched artist triggers fetch + persist, no
alias call when not matched, alias-fetch failure doesn't break the
match outcome.
# Verification
- 23/23 new tests pass
- Ruff clean
|
3 days ago |
|
|
235ada7e0f |
Add pure artist-name comparison helper with alias awareness
Issue #442 — files tagged with one spelling of an artist's name (Japanese kanji `澤野弘之`) get quarantined when SoulSync expects the romanized spelling (`Hiroyuki Sawano`). Raw similarity comparison scored 0% across scripts. MusicBrainz exposes alternate-spelling aliases on every artist record but the verifier never consulted them.
This commit adds the pure helper that does the alias-aware comparison. No I/O, no DB access, no network. Caller supplies the aliases (looked up from library DB or live MB by later commits in this PR). Default threshold matches the verifier's existing `ARTIST_MATCH_THRESHOLD` (0.6) so wiring this in preserves current pass/fail semantics on the no-alias path.
# API
```
artist_names_match(expected, actual, *, aliases=None, threshold=0.6, similarity=None) -> (matched: bool, best_score: float)
```
- Direct compare first (fast path + baseline score)
- If below threshold, score each alias against `actual`
- First alias to clear threshold → match
- Returns the best score across all candidates so callers can log the score they made the decision on
```
best_alias_match(expected, actual, aliases=None, *, similarity=None) -> (winner: Optional[str], best_score: float)
```
Companion helper for callers that want to surface WHICH alias triggered the match (debug logs, UI explanations). No threshold — purely informative.
# Architectural choices
- **Pure function**: no I/O. Caller (verifier, future matching-engine consumers) owns alias lookup strategy + threshold tuning.
- **Custom similarity callable**: lets the verifier pass its parenthetical-stripping normaliser without this module having to know about it. Defaults to lowercase + SequenceMatcher (matches the verifier's existing behaviour).
- **Defensive coercion**: aliases input handles None entries, empty strings, non-string types, sets, tuples, lists — caller may feed raw MB response data without cleaning first.
- **Backward compat**: `aliases=None` or empty → behaves identically to a plain similarity check. Paths not yet wired up to alias lookup see no behaviour change.
# Tests (28)
- Direct compare (no aliases): exact / case / whitespace / fuzzy / different
- Cross-script with aliases: Japanese ↔ romanized (reporter's case 1), Cyrillic ↔ Latin (reporter's case 2), symmetric direction, no-match fallthrough so aliases don't mask genuine mismatches
- Aliases input handling: None, empty, set, tuple, None-entries, non-string entries
- Threshold: default matches verifier's 0.6, custom stricter, custom looser
- Custom similarity: applies to both direct + alias compare
- Best-alias-match introspection
- Backward compat parametrised across 5 cases
# What this commit does NOT do
This is the helper module + tests only. Subsequent commits in this PR populate aliases (MB worker), provide live MB lookup with cache for un-enriched artists, and wire the helper into the AcoustID verifier where the quarantine decision actually fires. |
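A minimal sketch of the contract described above, assuming the documented defaults (lowercase + SequenceMatcher, first-alias-wins); the real module's normalisation and defensive coercion are richer:

```python
from difflib import SequenceMatcher
from typing import Callable, Iterable, Optional, Tuple

def _default_similarity(a: str, b: str) -> float:
    # Assumed default: lowercase + SequenceMatcher ratio.
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def artist_names_match(
    expected: str,
    actual: str,
    *,
    aliases: Optional[Iterable] = None,
    threshold: float = 0.6,
    similarity: Optional[Callable[[str, str], float]] = None,
) -> Tuple[bool, float]:
    score_fn = similarity or _default_similarity
    # Direct compare first: fast path + baseline score.
    best = score_fn(expected, actual)
    if best >= threshold:
        return True, best
    # Below threshold: score each alias; first to clear wins.
    for alias in (aliases or []):
        if not isinstance(alias, str) or not alias.strip():
            continue  # defensive coercion: skip None / empty / non-string
        score = score_fn(alias, actual)
        best = max(best, score)
        if score >= threshold:
            return True, best
    return False, best
```

With no aliases this degrades to a plain similarity check, matching the stated backward-compat guarantee.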
3 days ago |
|
|
c02d51d60d |
Plex: trigger_library_scan + is_library_scanning use auto-detected section — fixes #535
# Bug
Plex servers with the music library named anything other than "Music"
(Música, Musique, Musik, Musica, 音乐, موسيقى, etc.) hit this error
after every import cycle:
soulsync.plex_client - ERROR - Failed to trigger library scan
for 'Music': Invalid library section: Music
soulsync.web_scan_manager - ERROR - Failed to initiate PLEX
library scan via web
Side effect: `wishlist.processing` kept reporting "Missing from
media server after sync" for tracks that DID import correctly, so
they got perpetually re-added to the wishlist.
# Root cause
`_find_music_library` correctly auto-detects the music section by
`section.type == 'artist'` and stores it on `self.music_library` —
works for any locale because the type is language-neutral. Read
methods (`get_artists`, etc.) route through `_get_music_sections`
which returns `[self.music_library]`, so they never had the bug.
But `trigger_library_scan` and `is_library_scanning` ignored
`self.music_library` and called
`self.server.library.section(library_name)` directly with the
hardcoded `"Music"` default. `server.library.section('Music')`
raises `NotFound` on any server whose section isn't literally
named "Music".
# Fix
Both methods now prefer `self.music_library` first, fall back to
literal `library_name` lookup only when auto-detection hasn't
populated the cached reference (test fixtures, edge cases).
`is_library_scanning`'s activity-feed match also corrected to
filter by the resolved section's actual title — the prior code
matched `library_name.lower() in activity_title.lower()` which
defaults to "music" and would never match activities for
non-English sections.
`trigger_library_scan`'s success log line now surfaces the actual
section title (`Música`) instead of the unused `library_name`
default ("Music") — confusing when debugging on non-English servers.
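The prefer-cached-then-fall-back shape described in the fix can be sketched as follows; class, attribute, and method names are assumed from the commit text, not the real client:

```python
class PlexClientSketch:
    """Illustrative only: mirrors the described fix, not the real plex_client."""

    def __init__(self, server, music_library=None, library_name="Music"):
        self.server = server
        self.music_library = music_library  # populated by auto-detection
        self.library_name = library_name

    def _resolve_section(self, library_name=None):
        # Prefer the auto-detected section (locale-neutral, found via
        # section.type == 'artist'); fall back to literal-name lookup
        # only when auto-detection hasn't populated the cached reference.
        if self.music_library is not None:
            return self.music_library
        return self.server.library.section(library_name or self.library_name)

    def trigger_library_scan(self, library_name=None):
        section = self._resolve_section(library_name)
        section.update()
        # Surface the resolved title (e.g. "Música"), not the default arg.
        return section.title
```

The same `_resolve_section` helper would serve `is_library_scanning`, whose activity-feed match then filters on the resolved section's actual title.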
# Tests added (13)
`tests/media_server/test_plex_non_english_section_name.py`:
- `test_uses_auto_detected_section_regardless_of_locale` — parametrised
across 6 locale variants (Música, Musique, Musik, Musica, 音乐, موسيقى).
Each verifies trigger_library_scan calls the auto-detected
section's `update()`, NOT a literal-name fallback. Stub raises
AssertionError on `server.library.section()` so a regression that
re-introduces the fallback fails loudly.
- `test_falls_back_to_literal_lookup_when_no_auto_detection` —
backward compat: music_library=None → literal lookup as before.
- `test_explicit_library_name_arg_used_only_when_no_auto_detection` —
auto-detected wins over explicit kwarg when both available.
- `test_logs_correct_section_label_on_success` — log line surfaces
resolved section title.
- 4 symmetric tests for is_library_scanning covering refreshing-attr
check, activity-feed title match, no-match for unrelated sections,
fallback path.
# Verification
- 13 new tests pass
- 84/84 media_server tests pass (no regression in the existing
Plex / Jellyfin / Navidrome suite)
- 2458 full suite passes (+13 from baseline)
- Ruff clean
|
3 days ago |
|
|
402d851cac |
Deezer search: drop advanced-syntax at endpoint, free-text + rerank wins
Live-API verification revealed advanced-syntax queries hurt more than they help on this endpoint. Switching the import-modal Deezer search back to free-text + local rerank.
# What live testing showed
Hit Deezer's public API with both query forms for the issue #534 case (`Dirty White Boy` + `Foreigner`):
**Free-text (`q=Dirty White Boy Foreigner`):**
- Returns 21 results
- Real Foreigner Head Games studio cut at #1
- Live versions at #2-10
- Karaoke / cover variants at #11-15
**Advanced (`q=track:"Dirty White Boy" artist:"Foreigner"`):**
- Returns 12 results
- "(2008 Remaster)" at #1 — canonical Head Games cut MISSING from top 8 entirely
- Live + alt-album versions follow
Advanced syntax DOES filter karaoke at the API level (none in the 12-result set vs. 5 at positions 11-15 in free-text), but it has its own ranking bias that surfaces remasters / "Best Of" cuts ahead of the canonical recording. Net regression for the user-facing goal.
# Fix
1. Endpoint reverts to free-text query with local rerank applied.
2. Local rerank gains "remaster" / "remastered" / "reissue" patterns under VARIANT_TAG_PATTERNS (soft 0.4× penalty — user may want them but they shouldn't outrank the original).
3. Client kwarg support (`track=` / `artist=` / `album=`) preserved for future opt-in callers (e.g. exact-match flows where API-level filtering matters more than ranking).
# Verified end-to-end against live Deezer API
Re-ran the exact #534 case through the live API + new rerank. Top 15 results post-rerank:
1. Dirty White Boy — Foreigner — Head Games ← REAL CUT AT TOP
2-10. Various live versions
11-15. Karaoke / cover / tribute variants ← BURIED
Real Foreigner Head Games studio cut at #1, exactly the user's ask.
# Tests
- `test_relevance.py` — variant tag patterns extended; existing tests still pass (50 tests).
- `test_search_match_endpoints.py::test_joins_track_and_artist_into_free_text_query` — replaces `test_passes_track_and_artist_as_kwargs`; verifies the endpoint sends a free-text join, NOT field-scoped kwargs (the prior test asserted the wrong direction now).
- Karaoke-burying assertion at the endpoint still pins the user-visible behaviour.
- Client kwarg path tests untouched (still pin advanced-syntax construction for future opt-in callers).
# Verification
- 75 relevance + endpoint + query tests pass
- 2445 full suite passes
- Ruff clean
- Live Deezer API shows real cut at #1 post-rerank |
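The soft 0.4× variant penalty this commit extends to remasters can be sketched like so; the pattern table below is an assumed subset, not the real VARIANT_TAG_PATTERNS:

```python
import re

# Assumed subset of the variant pattern table; illustrative only.
VARIANT_TAG_PATTERNS = [r"\bremaster(ed)?\b", r"\breissue\b", r"\blive\b"]
VARIANT_PENALTY = 0.4  # soft: the user may want these, but they shouldn't win

def variant_multiplier(title: str) -> float:
    # Tagged variants get a multiplicative penalty; untagged titles pass through.
    lowered = title.lower()
    if any(re.search(p, lowered) for p in VARIANT_TAG_PATTERNS):
        return VARIANT_PENALTY
    return 1.0

results = [
    "Dirty White Boy (2008 Remaster)",
    "Dirty White Boy",
    "Dirty White Boy (Live)",
]
# Sorting by the multiplier pushes the untagged original to the top.
ranked = sorted(results, key=variant_multiplier, reverse=True)
```

This is why the canonical cut outranks the "(2008 Remaster)" entry post-rerank even when Deezer returns the remaster first.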
3 days ago |
|
|
59992d42a8 |
Deezer search: free-text fallback when advanced query returns 0
Defensive followup to the relevance fix. Deezer's advanced search
syntax (`artist:"X"`) is documented as substring match, but in
practice it's brittle on artist name variants ("Foreigner [US]",
"The Foreigner") and on tracks indexed under non-canonical title
spellings. When the advanced query returns nothing, we'd previously
land at "No matches" — a regression vs. pre-fix behaviour where
free-text would have returned a less-relevant but non-empty set.
Fix: when the advanced query returns 0 results AND the caller used
field-scoped kwargs, fall back to a free-text join of the same
kwargs and re-query. Caller-side rerank still tightens whatever the
fallback returns, so the worst-case post-fix behaviour is the
pre-fix behaviour — never strictly worse.
Pulled the cache + parse + store dance into a private helper
(`_search_tracks_with_query`) so the orchestration can call it
twice (advanced → fallback) without code duplication. Single API
call when the advanced query has results — no wasted requests.
Diagnostic logger.debug fires when the fallback triggers so we can
see in production whether it's happening (and to which queries).
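The advanced-then-fallback orchestration described above can be sketched as a pure function; `api_get` and the helper shape are assumptions standing in for the real client plumbing (`_search_tracks_with_query`, cache, parse, store):

```python
def search_tracks_sketch(api_get, *, track=None, artist=None, album=None, query=None):
    """Hedged sketch: advanced query first, free-text re-query on empty."""
    def _run(q):
        return api_get("/search/track", {"q": q}).get("data", [])

    used_advanced = False
    if track or artist or album:
        parts = []
        if track:
            parts.append(f'track:"{track}"')
        if artist:
            parts.append(f'artist:"{artist}"')
        if album:
            parts.append(f'album:"{album}"')
        q = " ".join(parts)
        used_advanced = True
    else:
        q = query or ""
    if not q:
        return []  # empty input fast-fails without an API call

    results = _run(q)
    if not results and used_advanced:
        # Advanced syntax is brittle on name variants; fall back to a
        # free-text join of the same kwargs and re-query. Worst case is
        # the pre-fix behaviour, never strictly worse.
        fallback_q = " ".join(p for p in (track, artist, album) if p)
        if fallback_q != q:
            results = _run(fallback_q)
    return results
```

A single API call fires when the advanced query has results; legacy free-text callers never enter the fallback branch.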
# Tests added (4)
- `test_falls_back_to_free_text_when_advanced_empty` — advanced
query returns 0, free-text returns hits; client returns the
free-text hits + both API calls fire.
- `test_no_fallback_when_advanced_query_has_results` — single hit
on advanced query → no second API call.
- `test_no_fallback_when_legacy_free_text_call` — legacy callers
already exhausted the only path; empty result is final.
- `test_no_fallback_when_query_unchanged` — empty kwargs path
doesn't trigger the fallback branch (used_advanced=False).
# Existing tests updated
The 4 prior `TestSearchTracksQueryWiring` + `TestSearchTracksCacheKey`
tests were stubbing `_api_get` to return empty `{'data': []}` and
asserting `assert_called_once`. With the new fallback, those stubs
trigger a second API call and the assertions break — even though
the FIRST call construction is what the tests cared about. Updated
the stubs to return one fake hit so the fallback doesn't fire, and
switched to `call_args_list[0]` for first-call inspection.
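The first-call inspection pattern described above, shown with `unittest.mock` (illustrative calls, not the real test bodies):

```python
from unittest.mock import MagicMock

client = MagicMock()
# One fake hit so a fallback branch in the code under test wouldn't fire.
client._api_get.return_value = {"data": [{"title": "hit"}]}

# Simulate what the code under test would do:
client._api_get("/search/track", {"q": 'track:"X" artist:"Y"'})
client._api_get("/search/track", {"q": "X Y"})  # hypothetical second call

# assert_called_once breaks as soon as a second call exists; inspect
# the FIRST call's construction explicitly instead.
first_args, _first_kwargs = client._api_get.call_args_list[0]
assert first_args[1]["q"] == 'track:"X" artist:"Y"'
```

`call_args_list[0]` keeps the assertion pinned to query construction even if the production code later issues additional requests.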
# Verification
- 18/18 deezer query tests pass (14 prior + 4 new)
- 2445 full suite passes (+4 from prior commit)
- Ruff clean
|
3 days ago |
|
|
8603cd6680
|
Preserve server source during album fill
- derive the destination server_source from the target album context
- write it on copied rows and retarget moved rows too
- cover the copy branch with a regression test |
3 days ago |
|
|
1cc37081a6 |
Fix Deezer search relevance — issue #534
# Background
User reported (#534) that the import-modal "Search for Match" dialog returned irrelevant results when Deezer was the metadata source. Searching `Dirty White Boy` + `Foreigner` returned 5+ karaoke / "originally performed by" / "in the style of" / "re-recorded" / tribute-band results ranked above the actual Foreigner studio cut from Head Games. User had to scroll past the junk every time, or fall back to iTunes search which is much slower.
# Root cause — two layers
1. **Endpoint joined `track + artist` into free-text query.** `/api/deezer/search_tracks` was passing `q=Dirty White Boy Foreigner` to Deezer's `/search/track` API. Deezer fuzzy-matches that string across title / lyrics / artist / album / contributors and orders by global popularity — anything that appears across many compilations outranks the canonical recording.
2. **No local rerank.** None of the search-modal endpoints applied any post-filtering. Deezer's API order shipped straight to the user.
# Fix — same architectural shape Cin would build
## Layer 1: field-scoped query at the client boundary
`core/deezer_client.py::search_tracks()` now accepts optional `track`, `artist`, `album` kwargs. When provided, builds Deezer's advanced search syntax: `q=track:"X" artist:"Y" album:"Z"`. Massive relevance improvement because each term matches the right field instead of fuzzy-matching everywhere.
Backward compat preserved: legacy free-text `query=` callers still work unchanged. Field-scoped path takes precedence when both are provided. Empty input fast-fails without an API call. Embedded double-quotes stripped (Deezer's syntax has no escape mechanism).
## Layer 2: provider-neutral relevance reranker
New `core/metadata/relevance.py` module — pure-function rerank over the canonical `Track` dataclass. Composable scoring:
- **Cover/karaoke patterns** (multiplier 0.05, effectively buries): matches "karaoke", "originally performed by", "in the style of", "made famous by", "tribute", "vocal version", "backing track", "cover version", "re-recorded", "cover by", etc. across title, album, AND artist fields. Catches the screenshot's exact junk: artist credits like "Pop Music Workshop" / "The Karaoke Channel" / "Foreigner Tribute Band".
- **Variant tags** (multiplier 0.4): live / acoustic / demo / instrumental / remix / radio edit / club mix etc. — softer penalty since the user MAY want them. Skipped entirely when the expected_title contains the same tag (so searching "Track (Live)" still ranks Live versions first).
- **Exact artist boost** (multiplier 1.5): primary artist exactly matches expected_artist after normalisation. Single strongest signal for "this is the canonical recording".
- **Title + artist similarity** via SequenceMatcher (parentheticals + punctuation stripped before comparison).
- **Album-type weighting**: album=1.0 > single/ep=0.85 > compilation=0.7. Compilations are more likely tribute / karaoke repackages.
Each component is a standalone function so tests pin them individually without standing up the full pipeline.
## Wired at three search-modal endpoints
- `/api/deezer/search_tracks` — uses both layers (field-scoped query + rerank).
- `/api/itunes/search_tracks` — uses rerank only (iTunes API has no advanced-syntax search, but karaoke / cover variants still leak through and need the local penalty).
- `/api/spotify/search_tracks` — already builds field-scoped `track:X artist:Y` query; rerank added as the consistency safety net so all three sources behave the same from the user's perspective.
Other Deezer call sites (matching engine, watchlist scanner, auto-import single-track ID) deliberately not touched in this PR — they have their own elaborate scoring pipelines tuned to their specific contexts and aren't surfacing the user-reported issue. Per Cin: "don't refactor beyond what the task requires."
# Tests
71 new tests across 3 files:
- `tests/metadata/test_relevance.py` (50 tests) — every scoring component pinned individually + the issue #534 screenshot reproduced as a regression test (real Foreigner cut wins after rerank, karaoke variants drop to bottom).
- `tests/metadata/test_deezer_search_query.py` (14 tests) — advanced-syntax query construction, field-scoped wiring at the client boundary, free-text path unchanged, kwargs win when ambiguous, limit clamping, cache key consistency.
- `tests/imports/test_search_match_endpoints.py` (7 tests) — end-to-end through Flask test client: Deezer endpoint passes kwargs not joined query; karaoke buried at bottom for all three sources; legacy query param still works without rerank.
# Verification
- 2441 full suite passes (+71 from baseline 2370)
- 0 failures (the prior watchdog flake fix held)
- Ruff clean across all changed files
- JS parses clean (`node -c webui/static/helper.js`)
# Architectural standards followed
- **Logic at the right boundary.** Query construction lives in the client (every caller benefits from one change). Rerank lives in a neutral module (`core/metadata/relevance.py`) over the canonical `Track` dataclass — works for any source, not Deezer-specific.
- **Explicit > implicit.** Every scoring rule has its own named function. Pattern tables are module-level constants tests can introspect.
- **Scope discipline.** Audited every Deezer search call site; fixed the user-reported one + the consistent siblings. Did NOT speculatively normalise every Deezer call across the codebase.
- **Backward compat.** Free-text `query=` callers untouched. Kwargs added to existing client method signature with safe defaults.
- **Tests pin contract at correct boundary.** Pure-function rerank tests don't mock anything; client-query tests stub at `_api_get`; endpoint tests run through the real Flask app. |
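The multiplicative scoring components described above compose roughly as in this sketch; the pattern tables and weights are assumed from the commit text, and the real module's normalisation (parenthetical/punctuation stripping) is omitted:

```python
from difflib import SequenceMatcher
import re

# Assumed subsets of the real pattern tables; illustrative only.
COVER_PATTERNS = [r"karaoke", r"originally performed by", r"in the style of", r"tribute"]
VARIANT_PATTERNS = [r"\blive\b", r"\bacoustic\b", r"\bdemo\b", r"\bremix\b"]

def _has(patterns, *fields):
    text = " ".join(f.lower() for f in fields if f)
    return any(re.search(p, text) for p in patterns)

def relevance_score(title, artist, album, *, expected_title, expected_artist):
    # Base: title similarity to the expected title.
    score = SequenceMatcher(None, title.lower(), expected_title.lower()).ratio()
    # Cover/karaoke: 0.05x effectively buries; checked across all fields.
    if _has(COVER_PATTERNS, title, album, artist):
        score *= 0.05
    # Variant tags: softer 0.4x, skipped when the expected title carries
    # the same tag (searching "Track (Live)" keeps Live versions first).
    if _has(VARIANT_PATTERNS, title) and not _has(VARIANT_PATTERNS, expected_title):
        score *= 0.4
    # Exact artist match: strongest single boost.
    if artist.lower().strip() == expected_artist.lower().strip():
        score *= 1.5
    return score
```

Because every rule is a multiplier on a similarity base, each component can be pinned in isolation while the composed score still yields a total order for the rerank.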
3 days ago |
|
|
abab663eb7 |
Auto-import: album duration = album total + conservative re-import UPDATE path
Two pre-existing parity gaps in `record_soulsync_library_entry` that the prior parity commits left untouched. Both close real holes between auto-import writes and what the soulsync_client deep scan would have produced.
# Gap 1: Album duration was the first-imported track's duration
`record_soulsync_library_entry` is called once per track. The album INSERT only fires for the FIRST track of a new album (subsequent tracks find the album row already exists). The INSERT was passing `duration_ms` — `track_info["duration_ms"]` — as the album's `duration` column. That's the duration of one track, not the album total. Compare to `SoulSyncAlbum.duration` in soulsync_client which is `sum(t.duration for t in self._tracks)`.
Fix:
- Worker computes `album_total_duration_ms = sum(...)` across every matched track and threads it onto context as `album.duration_ms`.
- side_effects reads that value (or falls back to the per-track duration for legacy non-auto-import callers) and writes it as the album row's `duration`.
# Gap 2: Re-imports of the same artist/album were insert-only
When the SELECT-by-id or SELECT-by-name found an existing soulsync artist or album row, the function skipped completely — no UPDATE path. Meant: artist genres / thumb / source-id reflected ONLY whatever the FIRST imported album supplied, never refreshing as more albums by that artist landed. Ten more imports later, the artist row still held whatever the first random import wrote.
Conservative fix: when an existing row matches, run an UPDATE that fills only the columns whose current value is NULL or empty. Never overwrites populated values — protects manual edits + enrichment-worker writes the same way the scanner UPDATE path preserves enrichment columns.
Implementation note: the empty-check happens in Python, NOT SQL. Initial pass tried `COALESCE(NULLIF(col, ''), NULLIF(col, 0), ?)` but SQLite's `NULLIF(text_col, 0)` returns the original text value instead of NULL — different types, no coercion. So the SQL-only conditional was unreliable on text columns. New helper does `SELECT cols FROM table WHERE id`, compares each column in Python, and emits UPDATE clauses only for the ones that need filling.
Allowlist defense: f-string column names go through `_SOULSYNC_FILLABLE_COLUMNS` validation before interpolation. Misuse adding new columns without an allowlist update fails closed (logger.debug + skip).
# Tests added (4)
- `test_album_duration_uses_album_total_not_single_track` — album with single-track context carrying explicit `album.duration_ms = 2_500_000` writes 2_500_000 to the album row, not the per-track 200_000 fallback.
- `test_re_import_fills_empty_artist_fields` — first import lands artist with empty thumb + empty genres; second import for same artist with thumb + genres present updates the existing row.
- `test_re_import_does_not_clobber_populated_artist_fields` — first import writes rich genres + thumb; second import with worse / different metadata leaves the existing row untouched.
- `test_re_import_fills_empty_source_id_when_missing` — first import had no source artist ID; second import does — fills the empty `spotify_artist_id` column on the existing row.
# Verification
- 10/10 side-effects tests pass (including 4 new + 4 from prior parity commit + 2 history/provenance)
- 217 imports tests pass (no regression)
- 2369 full suite passes (+4 from prior, +22 PR-total from baseline 2347)
- 1 pre-existing flake (`test_watchdog_warns_about_stuck_workers`, passes in isolation, unrelated)
- Ruff clean |
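The fill-only-empty UPDATE with the Python-side emptiness check can be sketched as below; the function name and allowlist contents are assumptions mirroring the commit text, not the real side_effects helper:

```python
import sqlite3

# Assumed allowlist; the real _SOULSYNC_FILLABLE_COLUMNS is defined in side_effects.
_SOULSYNC_FILLABLE_COLUMNS = {"thumb", "genres", "spotify_artist_id"}

def fill_empty_columns(conn, table, row_id, candidate_values):
    """UPDATE only the columns whose CURRENT value is NULL or ''.

    The emptiness check runs in Python, not SQL: SQLite's
    NULLIF(text_col, 0) compares text against integer 0 with no type
    coercion and returns the original text, so a SQL-only
    COALESCE/NULLIF conditional is unreliable on text columns.
    """
    # Allowlist gate before any f-string interpolation: unknown column
    # names fail closed (skipped) instead of reaching SQL.
    cols = [c for c in candidate_values if c in _SOULSYNC_FILLABLE_COLUMNS]
    if not cols:
        return 0
    row = conn.execute(
        f"SELECT {', '.join(cols)} FROM {table} WHERE id = ?", (row_id,)
    ).fetchone()
    current = dict(zip(cols, row))
    to_fill = {
        c: candidate_values[c]
        for c in cols
        if (current[c] is None or current[c] == "") and candidate_values[c]
    }
    if not to_fill:
        return 0
    clauses = ", ".join(f"{c} = ?" for c in to_fill)  # allowlisted names only
    conn.execute(
        f"UPDATE {table} SET {clauses} WHERE id = ?",
        (*to_fill.values(), row_id),
    )
    return len(to_fill)
```

Populated values are never overwritten, which is what protects manual edits and enrichment-worker writes on re-import.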
4 days ago |
|
|
f628009ab4 |
Auto-import: aggregate GENRE tags onto artists row + harden ISRC/MBID types
Cin pre-review followup. Two small parity gaps the prior commits left
open:
# 1. Genre tags land on the standalone artists row
`soulsync_client._scan_transfer` aggregates the GENRE tag across every
track in an album and surfaces it on `SoulSyncAlbum.genres` (which the
DatabaseUpdateWorker writes to the artists+albums row). Auto-import
was hardcoding `'spotify_artist': {'genres': []}` so the imported
artists row landed with empty genres — felt hollow compared to a
Plex/Jellyfin scan, which both pull genres from their respective APIs.
Fix:
- `_read_file_tags` now reads the GENRE tag (mutagen easy mode handles
MP3/FLAC/M4A consistently; some files carry multiple genres so it's
always returned as a list).
- `_process_matches` aggregates genres from each matched file's tags
into a deduped insertion-order list. Dedup is case-insensitive but
preserves original casing — so "Hip-Hop, Rap, Trap" reads naturally
in the JSON column instead of "hip-hop, rap, trap".
- Worker context's `spotify_artist['genres']` carries the aggregated
list, which `record_soulsync_library_entry` already filters via
`core.genre_filter.filter_genres` and writes to the artists row.
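The case-insensitive, casing-preserving, insertion-ordered dedup described above amounts to this small sketch (helper name is illustrative, not the real `_process_matches` code):

```python
def aggregate_genres(per_file_genres):
    """Aggregate GENRE tags across an album's files.

    Dedup is case-insensitive but keeps the first-seen original casing,
    in insertion order, so "Hip-Hop, Rap, Trap" survives as written.
    """
    seen = set()
    out = []
    for genres in per_file_genres:
        for genre in genres or []:
            key = genre.strip().lower()
            if key and key not in seen:
                seen.add(key)
                out.append(genre.strip())
    return out
```

A later lowercase-only duplicate ("hip-hop") is dropped while the first-seen casing ("Hip-Hop") is kept.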
# 2. Defensive str() cast for ISRC + MBID
`_build_album_track_entry` already coerces ISRC + MBID to string today
(via `str(isrc) if isrc else ''`). But if a future metadata-source
client returns int / None for either ID, the worker would propagate
the wrong type and side_effects.py's `.strip()` would AttributeError.
Cheap insurance: explicit `str()` cast in the worker before assignment
to track_info. Future-proofs against client drift.
# Tests added (3, in test_auto_import_context_shape.py):
- `test_context_aggregates_genres_from_track_tags` — multi-file
album with overlapping genre lists produces deduped, insertion-
ordered, original-case-preserved result. Stubs `_read_file_tags`
with monkeypatch so we don't need real audio.
- `test_context_genres_empty_when_no_tags` — files without GENRE
tag → empty list. Standalone library write handles gracefully
(genres column stays empty / NULL).
- `test_context_isrc_mbid_coerced_to_string` — hostile types
(int 12345678, None, int 999) coerced to safe strings before
reaching track_info.
# Verification
- 14/14 context-shape tests pass (11 prior + 3 new)
- 213 imports tests pass (no regression)
- 2365 full suite passes (+3 from prior, +18 PR-total)
- 1 pre-existing flake (`test_watchdog_warns_about_stuck_workers`,
passes in isolation)
- Ruff clean
|
4 days ago |
|
|
ec7da89434 |
Auto-import: surface artist source-id from metadata search response
Cin pre-review followup to the standalone library parity commit. The
prior commit fixed `spotify_artist['id']` from the wrong copy-paste
value (`identification['album_id']`) to read from
`identification['artist_id']`, but the identification dict produced
by `_search_metadata_source` and `_search_single_track` never set
`artist_id` — both extracted artist NAME from the search response
and discarded the source ID sitting right next to it. Net effect of
the prior commit: artists row source-id stayed NULL, just for a more
honest reason than before.
Now properly extracted:
- `_search_metadata_source` reads `best_result.artists[0]['id']`
alongside the artist name and returns it on the identification dict
as `artist_id`.
- `_search_single_track` does the same for single-track identification.
- `_identify_single`'s tag-based-confidence path forwards
`result.get('artist_id')` so the artist source-id propagates even
when high-confidence local tags override the search result's name.
Result: identification dict now carries `artist_id` whenever the
metadata source returned an artist with an ID. The worker context
already plumbs it onto `spotify_artist['id']` and
`spotify_album['artists'][0]['id']`, so the standalone library write
finally populates `<source>_artist_id` on the artists row.
Tests added (3, in `test_auto_import_context_shape.py`):
- `test_context_artist_id_uses_identification_artist_id` — when the
identification dict carries `artist_id`, context propagates it
onto `spotify_artist['id']` AND
`spotify_album['artists'][0]['id']`. Pins that the prior copy-
paste bug (artist['id'] = album_id) doesn't return.
- `test_context_artist_id_is_empty_when_identification_missing_it` —
fallback case (filename-only identification): context gets empty
string, NOT album_id. Honest failure mode.
- `test_search_metadata_source_extracts_artist_id_from_dict_artist`
— black-box test of `_search_metadata_source`: feed it a
spotify-shaped result with `artists[0]['id']` and verify
identification dict carries it forward.
Verification:
- 11/11 context-shape tests pass (8 prior + 3 new)
- 210 imports tests pass (no regression)
- 2362 full suite passes (+3 from prior commit, +15 PR-total)
- 1 pre-existing flake (`test_watchdog_warns_about_stuck_workers`,
passes in isolation)
- Ruff clean
|
4 days ago |
|
|
8493be207e |
Auto-import: SoulSync standalone library writes server-quality rows
# Background
SoulSync standalone is meant to be a full replacement for Plex /
Jellyfin / Navidrome — files imported via auto-import (or any other
import path) should land in the database with the same field richness
a media-server scan would write. They weren't.
# Gaps fixed
The auto-import worker built a context dict for each track and handed
it to `_post_process_matched_download` (the same callback the regular
download flow uses). That dict was missing three things downstream
needed:
1. **No `source` field anywhere.** `record_soulsync_library_entry`
reads `get_import_source(context)` to pick the source-aware ID
columns (`spotify_track_id` / `deezer_id` / `itunes_track_id` /
etc.) on the artists / albums / tracks rows. With no source, the
resolver returned an empty string → `get_library_source_id_columns("")`
returned an empty dict → the `UPDATE tracks SET <source>_id = ?`
blocks were silently skipped. Result: every auto-imported track
landed with NULL on every source-id column. Watchlist scans
(which match by stable source IDs to detect "this track is already
in library") couldn't recognise these rows and would re-download
them on the next pass.
2. **No `_download_username='auto_import'`.** Both
`record_library_history_download` and `record_download_provenance`
default to "Soulseek" when no `username` is in the context. Every
staging-folder import was being labelled as a Soulseek download
in library history + provenance — false signal in the UI.
3. **No per-recording IDs (`isrc`, `musicbrainz_recording_id`) on
track_info.** The Navidrome scanner already writes
`musicbrainz_recording_id` directly to the tracks row when present.
Picard-tagged libraries always carry MBID; metadata sources
(Spotify via MusicBrainz enrichment, Deezer, etc.) carry ISRC.
Auto-import had access to both via the metadata-source response
but didn't propagate them — so the soulsync row went in with
NULL on both columns.
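The source-aware ID-column resolution behind gap 1 can be sketched as a simple lookup; the mapping below is assumed from the column names mentioned (and simplified to one column per source), not the real `get_library_source_id_columns`:

```python
# Assumed mapping, illustrative only.
_TRACK_ID_COLUMNS = {
    "spotify": "spotify_track_id",
    "deezer": "deezer_id",
    "itunes": "itunes_track_id",
}

def get_library_source_id_column(source):
    # A missing/unknown source resolves to "" — downstream
    # "UPDATE tracks SET <source>_id = ?" blocks keyed on the column
    # name are then silently skipped, which is exactly the
    # NULL-on-every-source-id failure mode this commit fixes by
    # putting `source` on the context.
    return _TRACK_ID_COLUMNS.get((source or "").strip().lower(), "")
```

With no `source` in the context the resolver yields nothing to update, so watchlist scans matching on stable source IDs could never recognise auto-imported rows.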
# Changes
**`core/auto_import_worker.py` — `_process_matches`:**
- Top-level `'source': source` (from `identification['source']`)
- `'_download_username': 'auto_import'`
- `track_info['isrc']`, `track_info['musicbrainz_recording_id']` —
pulled from the per-track payload returned by the metadata source
- `track_info['album_id']` — back-reference so source-aware ID
resolution works on sources whose API nests album under
`track.album.id` rather than `track.album_id`
- `spotify_artist['id']` now correctly carries the artist's source ID
(was `identification['album_id']`, a copy-paste bug from the
original implementation that made artist-id resolution fall back
to fuzzy matching)
- `spotify_album['artists'][0]['id']` carries artist source ID for
the same resolution path
**`core/imports/side_effects.py`:**
- `record_library_history_download` source_map: add
`"auto_import": "Auto-Import"` — tags imported tracks correctly
- `record_download_provenance` source_service: add
`"auto_import": "auto_import"` — provenance shows real source
- `record_soulsync_library_entry` track INSERT: now includes
`musicbrainz_recording_id` + `isrc` columns (matches
`insert_or_update_media_track`'s shape for Navidrome /
Plex / Jellyfin scans). Both default to NULL when not present.
# Behavior preserved
- Files still land in the same library template path (no path-build
change)
- Other media-server flows (Plex / Jellyfin / Navidrome users)
unaffected — `record_soulsync_library_entry` still gates on
`get_active_media_server() == "soulsync"`. Auto-import on those
servers continues to drop the file in the library folder + emits
`batch_complete` for the scan-trigger automation, same as before.
- Direct downloads (search → Download button) unaffected — they
already passed `source` + `username` correctly.
# Tests added
`tests/imports/test_auto_import_context_shape.py` (8 tests, new file):
- Worker context carries `source` for every metadata source
(parametrised across spotify / deezer / itunes / discogs)
- `_download_username='auto_import'` set unconditionally
- ISRC + MBID propagate from track payload to track_info when present
- ISRC + MBID default to empty string when absent (downstream
normalises to NULL at write time)
- track_info includes album-id back-reference
`tests/imports/test_import_side_effects.py` (4 new tests + 2 schema
column adds):
- `record_soulsync_library_entry` writes mbid + isrc columns when
present in track_info
- Deezer source maps to deezer_id column (regression case for
source-aware column resolver)
- `record_library_history_download` labels `_download_username=
'auto_import'` as "Auto-Import" not "Soulseek"
- `record_download_provenance` registers source_service as
"auto_import" not "soulseek"
# Verification
- 8/8 new context-shape tests pass
- 6/6 side-effects tests pass (4 new + 2 existing)
- 207 imports tests pass
- 2359 full suite passes (+12 from baseline 2347, no regressions)
- 1 pre-existing flake (`test_watchdog_warns_about_stuck_workers`,
passes in isolation, unrelated to this change)
- Ruff clean
|
4 days ago |
|
|
8a6ee7a2c7 |
Auto-import: bounded ThreadPoolExecutor + per-candidate UI state isolation
# Concurrency model
Pre-refactor concurrency was emergent + unbounded:
- The worker's `_run` thread called `_scan_cycle` every 60s, processing candidates synchronously in a for-loop.
- The `/api/auto-import/scan-now` endpoint spawned a fresh `threading.Thread(target=_scan_cycle)` per click — extra parallel scan cycles on top of the timer.
- Multiple "Scan Now" clicks during in-flight processing → multiple threads racing on `_processing_paths` / `_folder_snapshots` state, no upper bound on concurrent scanners.
- `stop()` didn't wait for in-flight processing — could leave file moves / tag writes / DB inserts mid-flight.
Refactor to the pattern Cin uses elsewhere (`missing_download_executor`, `sync_executor`, `import_singles_executor` all use `ThreadPoolExecutor(max_workers=3, thread_name_prefix=...)`):
- **One scan thread** — both timer + manual triggers go through `trigger_scan()`, gated by a non-blocking `_scan_lock`. Duplicate triggers no-op instead of stacking parallel scanners.
- **Bounded executor** — `ThreadPoolExecutor` (default 3 workers, configurable via `auto_import.max_workers`) runs per-candidate work. Each candidate runs to completion in its own pool thread; up to N candidates run in parallel.
- `_scan_and_submit()` is fast — just enumeration + executor submit, returns immediately, doesn't block on per-candidate work.
- `_process_one_candidate(candidate)` holds the per-candidate logic identical to the old for-loop body, lifted into a method so the pool can run multiple instances concurrently.
- `_submitted_hashes` set + lock dedupes candidates across the timer + manual triggers so a candidate already queued / running doesn't get re-submitted.
- `stop()` calls `executor.shutdown(wait=True)` — clean shutdown, no orphaned file ops.
# Per-candidate UI state isolation
The executor refactor opened two concurrency holes that the old sequential model masked. Both fixed in this commit:
1. **Scalar UI fields stomped across pool workers.** Pre-refactor `_current_folder` / `_current_status` / `_current_track_*` were safe under the sequential model — only one candidate processed at a time, so the fields tracked the in-flight one. With three pool workers writing the same fields, the polling UI saw garbage like "Processing AlbumA, track 7/14: SongFromAlbumB". Replaced with `_active_imports: Dict[hash, _ActiveImport]` keyed on folder_hash, gated by `_active_lock`. Each pool worker owns its own entry. Helpers `_register_active` / `_update_active` / `_unregister_active` / `_snapshot_active` are the only API.
2. **Stats counters not thread-safe.** `self._stats[k] += 1` is read-modify-write — under load, parallel pool workers drop increments. New `_stats_lock` + `_bump_stat()` helper wraps every mutation. `get_status()` reads under the same lock and returns a copy.
# Endpoint change
`/api/auto-import/scan-now` no longer spawns its own scan thread — calls `auto_import_worker.trigger_scan()` (which routes through the shared lock + executor). Multiple clicks while a scan is in flight no-op deterministically. Endpoint still wraps the call in a daemon thread so the HTTP response returns immediately even if the staging walk is slow.
# Backward compat
The scalar `_current_folder` / `_current_status` / `_current_track_*` fields are preserved as **read-only properties** that resolve to the FIRST active import. The existing `get_status()` payload still includes those fields populated from the first entry — single-import UIs (and the test fixture) keep working unchanged. New `active_imports` array exposes the full multi-candidate state for parallel-aware UIs.
# Behavior preserved
- Per-candidate identify / match / process logic byte-identical
- Live-progress state preserved (per candidate now)
- Stability gate / already-processed dedup preserved
- `_record_in_progress` / `_finalize_result` UI rows preserved
- Tag-based loose-file grouping unchanged
# Behavior changes
- Multiple albums process IN PARALLEL up to `max_workers`
- "Scan Now" while scan in progress no-ops (was: spawned another)
- `stop()` waits for in-flight pool work via `shutdown(wait=True)`
- Auto-import card now lists each in-flight album (one line per active import) instead of a single shared progress line
# UI
`webui/static/stats-automations.js`:
- Progress widget reads `active_imports` array, renders one line per in-flight album with per-candidate status / track index
- Falls back to the legacy summary line when payload doesn't carry `active_imports` (older backend)
- Per-row "live processing" lookup now matches by `folder_hash` through the array instead of by `folder_name` against scalars
# Tests added (`tests/imports/test_auto_import_executor.py`)
- Pool config: default max_workers=3, configurable via constructor + via `auto_import.max_workers` config, floors at 1
- Scan lock: 5 concurrent `trigger_scan()` calls run only 1 scan while lock held; releases properly so subsequent triggers run
- Executor dispatch: 5 candidates → 5 process calls via the pool
- Bounded parallelism: max_workers=3 caps at 3 concurrent; max_workers=2 caps at 2
- Cross-trigger dedup: candidate submitted in scan A doesn't get re-submitted by scan B while still in-flight
- Graceful shutdown: `stop()` blocks until in-flight pool work finishes
- Per-candidate state isolation: 2 parallel workers updating their own candidate state don't interfere — each candidate's track_index / track_name / folder_name reads back exactly as written for that hash
- `get_status()` returns coherent `active_imports` array with one entry per in-flight candidate; aggregate top-level `current_status` is 'processing' when any entry is processing
- Unregister removes only that candidate, others stay visible
- Stats counter thread-safety: 1000 parallel bumps land at 1000 (the read-modify-write race regresses without the lock)
- `get_status()` stats snapshot is a copy, not a live reference
# Verification
- 17 new tests pass (executor + state isolation)
- 2347 full suite passes (1 pre-existing flaky test — `test_watchdog_warns_about_stuck_workers` — passes in isolation, unrelated)
- Ruff clean |
4 days ago |
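The trigger/dedup shape described above can be sketched roughly like this. `ScanCoordinator`, its field names, and the `(hash, work)` candidate tuples are illustrative stand-ins for the worker's real API, not the production code:

```python
import threading
from concurrent.futures import ThreadPoolExecutor


class ScanCoordinator:
    """Sketch: single non-blocking scan gate + bounded pool + hash dedup."""

    def __init__(self, max_workers: int = 3):
        self._scan_lock = threading.Lock()
        self._executor = ThreadPoolExecutor(
            max_workers=max(1, max_workers), thread_name_prefix="auto-import"
        )
        self._submitted_hashes: set[str] = set()
        self._submitted_lock = threading.Lock()

    def trigger_scan(self, candidates) -> bool:
        # Non-blocking acquire: a second trigger during a scan no-ops.
        if not self._scan_lock.acquire(blocking=False):
            return False
        try:
            for cand_hash, work in candidates:
                with self._submitted_lock:
                    if cand_hash in self._submitted_hashes:
                        continue  # already queued or running — skip
                    self._submitted_hashes.add(cand_hash)
                future = self._executor.submit(work)
                # Clear the dedup entry once the pool finishes this candidate.
                future.add_done_callback(lambda _f, h=cand_hash: self._done(h))
            return True
        finally:
            self._scan_lock.release()

    def _done(self, cand_hash: str) -> None:
        with self._submitted_lock:
            self._submitted_hashes.discard(cand_hash)

    def stop(self) -> None:
        # Blocks until in-flight pool work finishes — no orphaned file ops.
        self._executor.shutdown(wait=True)
```

Enumeration + submit happens under the lock and returns immediately; the per-candidate work itself runs on pool threads, so the lock never serializes processing.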
|
|
e11786ee40 |
Auto-import matching: fix Deezer source classification + bump tolerance
User report: all 6 staging candidates failing with "Could not match
tracks to album tracklist" despite identification correctly resolving
each album. 18 properly-tagged Chris Brown F.A.M.E. tracks, 21
properly-tagged Mr. Morale tracks, etc. — every match attempt
rejected by the duration sanity gate.
Root cause: I had Deezer in `_SECONDS_DURATION_SOURCES`, assuming
Deezer's `duration` field was raw seconds (which the API returns).
But `DeezerClient.get_album_tracks` already converts seconds → ms
INTERNALLY (`'duration_ms': item.get('duration', 0) * 1000`) before
the value reaches the matcher. My helper saw `source='deezer'` →
multiplied by 1000 again → 255000 ms became 255,000,000 ms (70 hours).
Every track-file pair failed the gate by a factor of 1000×.
Diagnostic chain that got me there:
1. Added `[Album Matching] No matches: X files, Y tracks, Z
duration-rejected, W below threshold` summary log so future "0
matches" reports surface the rejection reason.
2. Fixed the helper's logger from `logging.getLogger(__name__)` (which
resolves outside the soulsync handler tree → invisible in app.log)
to `get_logger("imports.album_matching")` (under the namespace the
file handler watches).
3. Added per-rejection-type diagnostic showing actual file vs track
duration values + raw track keys + source.
That third diagnostic surfaced `track 'United In Grief' resolved=255000000
(raw duration_ms=255000, raw duration=None, source='deezer')` —
making the bug obvious.
Fixes:
- Moved Deezer from `_SECONDS_DURATION_SOURCES` to
`_MS_DURATION_SOURCES`. Comment documents WHY (the client converts
before returning) so a future reader doesn't "fix" the
classification back the wrong way.
- Bumped `DURATION_TOLERANCE_MS` from 3000 → 10000 (3s → 10s) to
match Picard ~7s / Beets ~10-15s / Plex ~10s industry baselines.
3s was a defensive copy of the post-download integrity check
threshold but that's a different problem (catching truncated
downloads, not identifying recordings across remasters/encodings).
- `_track_duration_ms` magnitude heuristic kept as fallback for
unknown / missing source (mocked test data without `source` field).
- Added `Match aborted` warnings at the three earlier silent return
points in `_match_tracks` (no client, no album_data, no tracks)
so future "Could not match" reports show WHICH step bailed.
- Added per-run diagnostic in `match_files_to_tracks` that logs the
first duration rejection's actual values — surfaces unit mismatches
+ drift problems without spamming N×M lines per run.
Test changes:
- `test_deezer_seconds_duration_converted_to_ms` renamed +
rewritten as `test_deezer_already_normalised_to_ms_by_client`
to pin the actual contract (matcher receives ms from the Deezer
client, takes as-is).
- `test_track_duration_source_aware_dispatch` updated — Deezer test
case now uses ms input + expects ms output.
- New `test_raw_deezer_seconds_falls_back_to_magnitude_heuristic`
pins the rare edge case where raw Deezer items WITHOUT `source`
reach the matcher (no client conversion path) — heuristic catches
it.
Verification:
- 179 import tests pass after changes
- Live test: all 6 user staging candidates now matching at 95-100%
confidence
- Multi-disc Mr. Morale lands with proper Disc 1 / Disc 2 / Disc 3
folder structure
- Picard-tagged libraries hit MBID fast paths (verified earlier)
- Tracks process in parallel via the existing scan-now thread spawn
(next commit refactors this to a proper bounded executor)
|
4 days ago |
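The corrected classification can be sketched as a small dispatch helper. Constant and function names below mirror the commit's description but are illustrative; per this fix, Deezer sits in the ms set because `DeezerClient.get_album_tracks` converts seconds to ms before the value ever reaches the matcher:

```python
from typing import Optional

# Illustrative source sets. Deezer is ms-classified because the client
# converts seconds -> ms internally before returning track dicts.
_MS_DURATION_SOURCES = {"spotify", "itunes", "qobuz", "hifi", "hydrabase",
                        "tidal", "deezer"}
_SECONDS_DURATION_SOURCES = {"discogs", "musicbrainz"}


def track_duration_ms(track: dict) -> Optional[int]:
    """Normalise a metadata-source track duration to milliseconds."""
    raw = track.get("duration_ms", track.get("duration"))
    if raw is None:
        return None
    source = (track.get("source") or "").lower()
    if source in _MS_DURATION_SOURCES:
        return int(raw)            # client already delivers ms: take as-is
    if source in _SECONDS_DURATION_SOURCES:
        return int(raw) * 1000     # raw API seconds -> ms
    # Unknown/missing source: magnitude-heuristic fallback
    # (values under 30000 are almost certainly seconds).
    return int(raw) * 1000 if raw < 30000 else int(raw)
```

With Deezer mis-classified as seconds, the 255000 ms value described above became 255,000,000 ms and failed every duration gate by 1000x; classified as ms it passes through untouched.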
|
|
a478747a89 |
Auto-import: dedup on folder_hash, not path — fixes silent-skip bug
User reported nothing happening on a chaotic staging root despite 6 candidates being detected. Logs showed "Processing folder" for 3 of 6 — the other 3 were silently skipped.
Root cause: The previous commit (`a9a6168`) introduced loose-file grouping — multiple `FolderCandidate` objects can now share a `path` (each album group at the staging root has the same parent directory but its own audio_files + folder_hash). But two pieces of dedup machinery still keyed on `path`:
- `_processing_hashes` (was `_processing_paths`) — runtime set of in-flight candidates. Path-keyed → first sibling marks the path, second + third siblings hit "already in flight" and skip.
- `_folder_snapshots` — mtime cache for stability check. Path-keyed → siblings overwrite each other's mtimes, stability check returns unreliable results for whichever sibling lost the write race.
Both kept track of an attribute that was previously unique-per-path (one candidate per directory) but my refactor broke that invariant without updating the dedup keys. Net effect: only the first candidate per directory ever got processed in a chaotic-root scenario.
Fix:
- Renamed `_processing_paths` → `_processing_hashes` set, keyed on `candidate.folder_hash`. Hash is unique per candidate by construction (different audio_files lists hash differently).
- `_folder_snapshots` retyped + rekeyed to `folder_hash`. Siblings no longer overwrite each other's mtime tracking.
- Both touched in lockstep — comments document why path-keyed dedup breaks for sibling candidates.
Test added (`test_sibling_candidates_have_unique_folder_hashes`): verifies 3-album loose root produces 3 candidates with distinct folder_hashes. If a future change breaks the invariant, the test fails before the silent-skip regression ships.
Verification:
- 178 imports tests pass (8 new this commit + 170 pre-existing this branch)
- Ruff clean
- Still scoped to import flow |
4 days ago |
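Why hash-keyed dedup survives where path-keyed dedup collapses can be shown with a toy hash. This `folder_hash` is purely illustrative (the real `FolderCandidate` hashing may differ), but the invariant is the same: siblings sharing a path still hash differently because each carries its own audio_files list:

```python
import hashlib


def folder_hash(path: str, audio_files: list) -> str:
    """Toy candidate hash over (path, sorted audio file names)."""
    digest = hashlib.sha1(path.encode("utf-8"))
    for name in sorted(audio_files):
        digest.update(name.encode("utf-8"))
    return digest.hexdigest()


# Three sibling album groups at the same staging root:
siblings = [
    folder_hash("/staging", ["fame_01.flac", "fame_02.flac"]),
    folder_hash("/staging", ["morale_01.flac"]),
    folder_hash("/staging", ["breezy_01.flac"]),
]
```

A path-keyed set would hold a single `/staging` entry and skip two of the three siblings; the hash-keyed set holds three distinct keys, so all three candidates stay eligible.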
|
|
a9a6168568 |
Auto-import scanner: group loose files by album + always recurse subfolders
Two related bugs in `AutoImportWorker._scan_directory` surfaced
during real-world testing of the chaotic-staging case (user dropped
loose tracks from multiple albums at staging root, alongside
intact album subfolders):
Bug 1 — Loose files bundled into one fake "album"
When loose audio files existed at a level, the scanner built ONE
FolderCandidate from all of them regardless of their album tags.
On a chaotic staging root with tracks from 3+ different albums,
the identifier picked the most-common album tag and the matcher
left every other album's tracks unmatched (or mis-attributed via
filename + position guessing).
Bug 2 — Subfolders silently ignored when root has loose files
The scanner only recursed into non-disc subfolders when there were
NO loose files at the parent level. So a layout like:
Staging/
loose1.flac (processed via the loose-files path)
Other Album Folder/ (silently ignored — never scanned)
would skip the album subfolders entirely. Common pattern when a
user moves a few tracks out of an album folder while leaving the
rest of the parent album folder intact, OR when other album
folders sit alongside a partially-extracted album.
Fix:
`_build_loose_file_candidates` (new method) reads each loose file's
`album` tag and groups by normalised album name. Each group becomes
its own FolderCandidate so a chaotic staging root produces one
candidate per album — identifier + matcher run cleanly per album.
Untagged loose files become individual single candidates. Disc
folders at the same level attach to whichever loose-file group's
album tag matches the disc-folder tracks; standalone disc folders
(no matching loose group) get their own multi-disc candidate.
The scanner now ALSO always recurses into non-disc subdirectories,
even when the current level has loose files. So album subfolders
sitting beside loose tracks get processed independently in their
own recursive scan.
Behavior preservation:
- Single-album loose-files staging (every file shares one album tag,
no parallel disc folders) → one FolderCandidate, identical to
pre-fix behavior. Pinned by `test_single_album_loose_files_still_one_candidate`.
- Disc-only directory (no loose files, only Disc 1/Disc 2 subdirs)
→ one multi-disc FolderCandidate, identical to pre-fix. Pinned
by `test_disc_only_directory_still_works`.
7 new tests in `tests/imports/test_auto_import_scanner_grouping.py`:
- Multiple-album loose root → multiple candidates
- Untagged loose files → individual singles
- Single-album loose-files regression guard
- Subfolders recursed even when root has loose files
- Disc folder attaches to matching loose group by album tag
- Disc folder with no matching loose group → standalone candidate
- Disc-only directory regression guard
All write real FLACs via mutagen + exercise `_scan_directory`
end-to-end (no mocking the tag reader — proves the production
read path works).
Verification:
- 7 new tests pass
- 2328 full suite passes (+7 new), 1 pre-existing flaky timing test
unrelated to this PR
- Ruff clean
- All changes still scoped to import flow — download flow byte-
identical
|
4 days ago |
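The grouping rule from `_build_loose_file_candidates` can be sketched like this. It's a simplified stand-in: the real method also handles disc-folder attachment, and the production normalisation may do more than lowercase/strip:

```python
from collections import defaultdict


def group_loose_files_by_album(file_tags: dict) -> tuple:
    """Bucket tagged loose files by normalised album name;
    untagged loose files become individual single candidates."""
    groups = defaultdict(list)
    singles = []
    for path, tags in file_tags.items():
        album = (tags.get("album") or "").strip().lower()
        if album:
            groups[album].append(path)   # one candidate per album group
        else:
            singles.append(path)         # no tag -> standalone single
    return dict(groups), singles
```

A chaotic root with tracks from multiple albums thus produces one candidate per album, so the identifier and matcher run cleanly per album instead of against one fake merged "album".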
|
|
f2cd95e0f1 |
Auto-import polish: real-file tag reader test, source-aware duration, pin consolation
Cin-pass on the MBID/ISRC fast-paths + duration-gate work.
Three small but real gaps closed.
Gap 1 — Real-file tag reader integration test
(tests/imports/test_auto_import_tag_reader_real_files.py, 6 tests):
The matcher unit tests use dict fixtures, which prove the algorithm
handles the right shapes once tags are read. They DON'T prove the tag
reader itself extracts the right values from real files. Mutagen's
easy-mode key normalisation (across FLAC / MP3 / M4A) is the exact
spot a future mutagen version could silently drift and break the
fast paths in production while every unit test stays green.
These tests write real FLAC files via mutagen (using the same
`_make_minimal_flac` pattern from `test_album_mbid_consistency.py`)
and assert `_read_file_tags` extracts:
- Picard's `MUSICBRAINZ_TRACKID` (lowercase normalisation in reader)
- `ISRC` (uppercase normalisation in reader; matcher strips
formatting at compare time)
- "track/total" parsing (TRACKNUMBER='5/12' → 5)
- Duration via `audio.info.length` from synthesised STREAMINFO
- Graceful empty-default return for tagless files
- Graceful empty-default return for invalid audio (not a crash)
Acknowledged gap (carried forward): MP3 + M4A integration coverage
not added — mutagen docs say easy-mode normalisation is identical
across all three formats, but only FLAC is pinned here. Followup
candidate.
Gap 2 — Source-aware duration dispatch
(core/imports/album_matching.py, 4 tests in test_album_matching_exact_id.py):
The previous `_track_duration_ms` helper used a magnitude heuristic
("anything below 30000 is seconds, convert × 1000") to decide
whether a track's duration was in seconds or ms. That worked for
typical tracks but had a real edge case: an actual sub-30-second
Spotify track (intros, interludes, skits) would be detected as
seconds and converted to 8.5 hours, breaking the duration sanity
gate.
Replaced with deterministic source-aware dispatch:
- Spotify / iTunes / Qobuz / HiFi / Hydrabase → ms (canonical)
- Deezer / Discogs / MusicBrainz → seconds, × 1000
- Tidal classified as ms (album-tracks endpoint convention; flagged
in code comment as needing real-world verification — defensive
if wrong)
- Magnitude heuristic kept as fallback for unknown / missing source
(mocked test data without source field)
Tests pin all four paths: confirmed-ms source, confirmed-seconds
source, unknown source falls back to heuristic, and the regression
case (sub-30s real track on a known-ms source — must not be
× 1000-converted).
Gap 3 — Cross-disc consolation rationale
(tests/imports/test_album_matching_helper.py, 1 test):
The `CROSS_DISC_POSITION_WEIGHT = 0.05` magic number had no test
proving it was load-bearing. Anyone could have set it to 0 thinking
"strict matching is better" without realising it would silently
break a real scenario.
New test (`test_cross_disc_consolation_is_load_bearing_for_imperfect_titles`)
constructs the exact case the consolation exists for: file has the
right title spelling but the metadata source returns a slightly-
different version (e.g. "Auntie Diaries" file vs "Auntie Diaries
(Remix)" track), AND the file's disc tag is wrong while the track
number agrees. Title sim ~0.78 × 0.45 = ~0.35 (below
MATCH_THRESHOLD 0.4). Without the 5% consolation → file goes
unmatched. With it → ~0.40, just clears.
The test doesn't justify "why 0.05 specifically" — that's still a
tuned knob, not a measured value. But it forces a deliberate
decision if someone wants to drop it: failing this test gives them
the "you broke imperfect-title cross-disc matching" message
explicitly.
Verification:
- 10 new tests across 3 files, all pass
- 35 album-matching tests total now (including pre-existing 17 +
18 fast-path)
- Full suite: 2321 passed, 1 pre-existing flaky timing test
(`test_watchdog_warns_about_stuck_workers` — passes in isolation,
fails only in full-suite runs, unrelated to this PR)
- Ruff clean
- All changes still scoped to import flow — download flow byte-
identical (verified by grep on every changed file)
|
4 days ago |
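The load-bearing arithmetic in Gap 3 can be checked directly. The weight constants are the ones named in the commit; the scorer here is stripped down to just the title term plus the consolation bonus, so it is a worked example, not the real `score_file_against_track`:

```python
TITLE_WEIGHT = 0.45
CROSS_DISC_POSITION_WEIGHT = 0.05
MATCH_THRESHOLD = 0.4


def toy_score(title_sim: float, cross_disc_track_agrees: bool) -> float:
    """Title term plus the cross-disc consolation bonus only."""
    score = title_sim * TITLE_WEIGHT
    if cross_disc_track_agrees:
        score += CROSS_DISC_POSITION_WEIGHT
    return score


# "Auntie Diaries" file vs "Auntie Diaries (Remix)" track: title sim ~0.78.
without_bonus = toy_score(0.78, False)   # ~0.351, below MATCH_THRESHOLD
with_bonus = toy_score(0.78, True)       # ~0.401, just clears it
```

Setting `CROSS_DISC_POSITION_WEIGHT` to 0 flips the second case below threshold, which is exactly the regression the new pinning test is there to catch.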
|
|
3246490800 |
Auto-import: MBID/ISRC fast paths + duration sanity gate
Brings the auto-import matcher to picard / beets / roon parity by reaching for the existing AcoustID-grade infrastructure (typed Album foundation, integrity check thresholds) and layering id-based exact matches on top of the fuzzy scorer. Picard-tagged libraries now land every track with full confidence on the first pass.
Three layered phases in `core/imports/album_matching.match_files_to_tracks`:
1. **MBID exact match** — file has `musicbrainz_trackid` tag, source returns the same id → instant pair, full confidence, no fuzzy scoring. Picard's primary identifier; per-recording.
2. **ISRC exact match** — file has `isrc` tag, source returns the same id → same fast-path, slightly lower priority than mbid (isrc can be shared across remasters). Both ids normalised before compare (uppercase + strip dashes/spaces for isrc, lowercase for mbid).
3. **Duration sanity gate** — files in the fuzzy phase whose audio length differs from the candidate track's duration by more than `DURATION_TOLERANCE_MS` (3s, matching the post-download integrity check) are rejected before scoring runs. Defends against the cross-disc / cross-release / wrong-edit problem the integrity check used to catch only AFTER the file had already been moved + tagged + db-inserted.
Tag reader (`_read_file_tags`) extended:
- Reads `isrc` (uppercased, strip / / spaces normalisation deferred to matcher)
- Reads `musicbrainz_trackid` as `mbid` (lowercased)
- Reads `audio.info.length` and converts to `duration_ms` to match the metadata-source convention
Metadata-source layer (`_build_album_track_entry`) extended:
- Propagates `isrc` from top-level OR `external_ids.isrc` (spotify shape — would otherwise be stripped before reaching the matcher)
- Propagates `musicbrainz_id` from top-level OR `external_ids.mbid` / `external_ids.musicbrainz`
- Without this layer, fast paths would silently never fire in production even though unit tests pass — pinned by `test_album_track_entry_propagates_isrc_and_mbid_from_source`
18 new tests in `tests/imports/test_album_matching_exact_id.py`:
- Direct: `find_exact_id_matches` with mbid, isrc, isrc normalisation, mbid > isrc priority, spotify-shape `external_ids.isrc`, no-id empty result, file-used-at-most-once
- Direct: `duration_sanity_ok` within / outside tolerance, missing durations defer
- End-to-end via `match_files_to_tracks`: mbid match short-circuits fuzzy scoring, id-matched files excluded from fuzzy phase, duration gate rejects wrong-disc collisions in fuzzy phase, normal matches pass through the gate, missing durations fall through, deezer seconds-vs-ms conversion, full picard-tagged 10-track album via mbid only
- Production-shape: `_build_album_track_entry` propagates isrc + mbid from spotify-shape (`external_ids.isrc`) AND itunes-shape (top-level `isrc`)
Verification:
- 35 album-matching tests pass total (17 helper + 18 fast-path)
- 23 multi-disc tests still pass after the extension (additive)
- Full suite: 2311 passed (+18 new), 1 pre-existing flaky timing test failure (`test_watchdog_warns_about_stuck_workers` — passes in isolation, fails only in full-suite runs, unrelated to this PR)
- Ruff clean
For users:
- Picard / Beets / Mp3Tag-tagged libraries (anyone who's organised their music) get instant perfect-confidence matches every time.
- Soulseek-tagged downloads (which usually carry isrc when sourced via metadata-aware soulseekers) get the fast path too.
- Naively-named files with no useful tags fall through to the improved fuzzy + duration-gated path — same correctness as before for the common case, much harder for the matcher to confidently pair the wrong file.
- One step closer to standalone-DB feature parity with plex / jellyfin / navidrome scanners.
Acoustid fingerprint fallback (for files with NO useful tags AND no MBID/ISRC) is the next followup PR. |
4 days ago |
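The normalise-then-compare rule for the two fast paths can be sketched as follows. Function names are illustrative (the commit's real entry point is `find_exact_id_matches`), but the normalisation matches what's described: uppercase plus dash/space stripping for ISRC, lowercase for MBID, MBID checked before ISRC:

```python
def normalise_isrc(value: str) -> str:
    # ISRCs compare case-insensitively and may carry dashes/spaces.
    return value.upper().replace("-", "").replace(" ", "")


def normalise_mbid(value: str) -> str:
    # MBIDs are UUIDs; lowercase is the canonical comparison form.
    return value.strip().lower()


def exact_id_match(file_tags: dict, track: dict) -> bool:
    """Sketch of the fast-path compare: MBID first, then ISRC."""
    f_mbid, t_mbid = file_tags.get("mbid"), track.get("musicbrainz_id")
    if f_mbid and t_mbid and normalise_mbid(f_mbid) == normalise_mbid(t_mbid):
        return True
    f_isrc, t_isrc = file_tags.get("isrc"), track.get("isrc")
    if f_isrc and t_isrc and normalise_isrc(f_isrc) == normalise_isrc(t_isrc):
        return True
    return False
```

Normalising on both sides is what lets a Picard-written lowercase MBID or a dash-formatted ISRC still hit the fast path against whatever casing the metadata source returns.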
|
|
f9f74ac511 |
Lift auto-import matching to testable helper + pin contracts
Cin-pass on the #524 + multi-disc fixes. Pre-merge polish.
Lifts: `core/imports/album_matching.py`
`AutoImportWorker._match_tracks` was a 100+-line method buried in a 1400-line class. Testing it required monkey-patching `_read_file_tags` + mocking the metadata client just to exercise the matching algorithm. Per Cin's "lift logic out of monolithic classes" pattern (same shape as the album-info builders / discography / quality scanner lifts), moved the dedup + scoring into `core/imports/album_matching.py` as pure functions over already-fetched data.
Helper exposes:
- Constants for every match weight (TITLE_WEIGHT, ARTIST_WEIGHT, POSITION_WEIGHT, NEAR_POSITION_WEIGHT, CROSS_DISC_POSITION_WEIGHT, ALBUM_WEIGHT, MATCH_THRESHOLD). Magic numbers killed.
- `dedupe_files_by_position(audio_files, file_tags, *, quality_rank)` — position-keyed quality dedup.
- `score_file_against_track(file_path, file_tags, track, *, target_album, similarity)` — pure per-(file, track) scorer.
- `match_files_to_tracks(audio_files, file_tags, tracks, *, target_album, similarity, quality_rank)` — full matching with greedy best-per-track + first-come-first-serve over deduped files.
Worker shrinks from 100 lines of inline algorithm to 8 lines that fetch tags + delegate to the helper.
Tests added (26 new across 3 files):
`tests/imports/test_album_matching_helper.py` (19 tests):
- Constants pin: weights sum to 1.0, threshold above position-only
- `dedupe_files_by_position`: quality wins, cross-disc preserved, tag-less files passed through, first-wins on equal quality
- `score_file_against_track`: perfect-agreement = 1.0, position needs both disc+track, near-position only same-disc, missing artist tags handled, disc field aliases (Spotify/Deezer/iTunes), filename fallback when title tag missing
- `match_files_to_tracks`: happy path, file used at-most-once, below-threshold left unmatched
- Edge case Cin would flag: tag-less file with strong filename title matches multi-disc album track via title alone (perfect-name scenario works); tag-less file with weak filename title against multi-disc API correctly stays unmatched (the behavior delta from the disc-aware fix — pinned so future readers see it's intentional)
`tests/test_import_album_match_endpoint.py` (3 tests):
- Backend warning fires when source missing from match POST
- No warning fires on the legit path (catches noisy-warning regression)
- Endpoint actually forwards source/name/artist to the payload builder (catches "logging the right warning but doing the wrong lookup" regression)
`tests/test_import_page_album_lookup_pattern.py` (4 tests):
- Source-text guard for the import-page #524 fix in stats-automations.js. Until the file is modularized enough for a behavioral JS test (under the existing tests/static/*.mjs pattern), regex-based assertions pin: the `_albumLookup` field exists, the click handler reads from it, both card renderers populate it before emitting onclick, and the cache stores `source` per entry. Caveat documented in the test module docstring.
Verification:
- All 26 new tests pass.
- Existing multi-disc tests (test_auto_import_multi_disc_matching.py) still pass after the lift — proves the helper is behavior-equivalent to the inline implementation it replaced.
- Full suite: 2293 passed, 1 flaky-timing failure (test_library_reorganize_orchestrator.py::test_watchdog_warns_about_stuck_workers — passes in isolation, fails only in full-suite runs, pre-existing, unrelated to this PR).
- Ruff clean.
Notes for the reviewer:
- The frontend stats-automations.js JS test is structural-only. Behavioral JS testing for that file requires modularizing the ~7k-line monolith first — out of scope for this fix.
- The cross-disc 5% consolation bonus is a small behavior change for users with weak/missing tag info on multi-disc albums. Pinned explicitly in `test_tagless_file_with_weak_title_unmatched_in_multidisc` so the trade-off is visible: correct multi-disc matching wins over optimistic position-only matching that produced wrong-disc files. |
4 days ago |
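The "greedy best-per-track, file used at most once, below-threshold unmatched" contract can be sketched independently of the real helper's signature (which also takes file tags, `target_album`, `similarity`, and `quality_rank`); the names here are illustrative:

```python
def greedy_match(files, tracks, score, threshold=0.4):
    """Greedy best-per-track over files, each file claimed at most once;
    pairs that never exceed the threshold stay unmatched."""
    used = set()
    matches = {}
    for track in tracks:
        best, best_score = None, threshold
        for file in files:
            if file in used:
                continue  # first-come-first-serve: already claimed
            candidate = score(file, track)
            if candidate > best_score:
                best, best_score = file, candidate
        if best is not None:
            used.add(best)
            matches[track] = best
    return matches
```

Because scoring is injected as a plain callable, the algorithm is testable with dict-backed fixtures and no monkey-patching, which is the point of the lift.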
|
|
c03edc3cb4 |
Auto-import: respect disc_number in dedup + match scoring
Caught while live-testing the #524 fix with kendrick lamar mr morale & the big steppers (3 discs). User dropped discs 1+2 loose in staging root + disc 3 in its own folder, every file perfectly tagged with disc_number/track_number/title — only 9 tracks ended up in the library, the rest got integrity-rejected and quarantined.
Two related bugs in `AutoImportWorker._match_tracks`:
1. **Quality dedup keyed on track_number alone.** The dedup loop kept `seen_track_nums[track_number] = file` and dropped any later file with the same number, treating it as a quality duplicate. On a multi-disc release where every disc has tracks 1..N, that collapses the album to one disc's worth of files BEFORE the matcher runs. User's 18 loose disc-1+disc-2 files reduced to 9 before any title/disc info was even consulted.
2. **Match scoring ignored disc_number.** The 30% track-number bonus fired whenever `ft[track_number] == track_num` regardless of disc. File with tag (disc=2, track=6, "Auntie Diaries", 281s) got the full bonus matching API track (disc=1, track=6, "Rich Interlude", 103s) — wrong file → wrong destination → integrity check correctly rejected and quarantined the file. Same for tracks 7, 8, 9.
Fix:
- Dedup keys on `(disc_number, track_number)` tuples — multi-disc files with parallel numbering all survive.
- Match scoring's 30% bonus only when BOTH disc AND track agree. Cross-disc same-track-number collisions get a small 5% consolation bonus so title similarity has to carry the match (covers cases where tag disc info is missing or wrong).
- API track disc_number read from `disc_number` (Spotify) / `disk_number` (Deezer) / `discNumber` (iTunes) defaulting to 1.
4 new pinning tests in `tests/imports/test_auto_import_multi_disc_matching.py`:
- 18-file 2-disc regression case (dedup preserves all)
- (disc=2, track=6) file matches API (disc=2, track=6) track, not the disc-1 same-numbered track
- Single-disc albums still match normally (no regression)
- Quality dedup within a single (disc, track) position still picks higher-quality format (.flac over .mp3)
Verification:
- 2268 full pytest suite passes (+4 new), 1 skipped, 0 failed
- Ruff clean
Same branch as the #524 fix because both surfaced from the same import session — easier reviewer context if they ship together. |
5 days ago |
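The key change can be sketched as follows (field names and the quality callback are illustrative): keying on the `(disc_number, track_number)` pair keeps parallel-numbered multi-disc files alive while still letting quality win within a single position:

```python
def dedupe_by_position(files, quality_rank):
    """Dedup on (disc_number, track_number) tuples so every disc's
    tracks 1..N survive; within a position, higher quality wins."""
    best = {}
    for file in files:
        key = (file.get("disc_number", 1), file.get("track_number"))
        current = best.get(key)
        if current is None or quality_rank(file) > quality_rank(current):
            best[key] = file
    return list(best.values())
```

Under the old `track_number`-only key, disc 2's track 1 would silently replace (or be dropped against) disc 1's track 1 before matching even ran.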
|
|
48aefbacdd |
Drop redundant `import sys` inside _auto_download_disabled
Ruff F811 — `sys` is already imported at module top (line 13). The local `import sys` inside `_auto_download_disabled` shadowed it needlessly. Caught by CI ruff check on the dev-nightly workflow. |
5 days ago |
|
|
950857ba40 |
ffmpeg gate also covers is_available — fixes the actual leak path
Previous commit split _check_ffmpeg into a side-effect-free
_locate_ffmpeg + the original auto-download _check_ffmpeg, and moved
__init__ to call _locate_ffmpeg. That alone wasn't enough — caught
the gap during a deeper audit:
is_configured() → is_available() → _check_ffmpeg() (with download)
The orchestrator registry, download engine, and the orchestrator's
own configured_clients() all probe is_configured() polymorphically at
boot. So when tests import web_server, the registry probes
youtube.is_configured() → is_available() → _check_ffmpeg() →
DOWNLOAD. My __init__ change didn't help because the registry boot
fires the same code path right after.
Real fix: gate the download branch inside _check_ffmpeg itself.
Returns False (and logs a warning) when running under pytest or when
SOULSYNC_NO_FFMPEG_DOWNLOAD=1. End users on a fresh install still get
auto-download on first real YouTube use (gate is off in production).
Container is unaffected (system ffmpeg via apt is found on PATH, the
download branch never runs).
Three detection paths in _auto_download_disabled():
- SOULSYNC_NO_FFMPEG_DOWNLOAD env var (explicit opt-out for CI /
build steps that want to disable outside pytest)
- PYTEST_CURRENT_TEST env var (set by pytest per-test — covers
in-test-body call path)
- 'pytest' in sys.modules (covers calls fired during pytest collection
/ import phase, BEFORE the per-test env var is set — which is
exactly when registry.py probes is_configured() at web_server
import time)
Verified by inspecting tools/ after a full suite run — empty (was
~388 MB after a single test_tidal_auth_instructions.py run before
the gate). Container behavior unchanged: shutil.which('ffmpeg')
returns /usr/bin/ffmpeg from the apt-installed package, so the
download branch is never reached anyway.
5 new pinning tests:
- pytest-in-sys.modules detection works
- PYTEST_CURRENT_TEST env detection works
- SOULSYNC_NO_FFMPEG_DOWNLOAD env detection works
- _check_ffmpeg returns False (no urlretrieve, no tools/ dir created)
when gate is on and ffmpeg is missing — pinned by trapping
urlretrieve to AssertionError so a regression blows up loud
- _locate_ffmpeg never triggers download or creates tools/ —
pinned by trapping both urlretrieve AND Path.mkdir on tools-prefixed
paths
2264 passed (+5), 1 skipped, 0 failed.
|
5 days ago |
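The three detection paths can be sketched like this. The real helper is `_auto_download_disabled` inside the YouTube client; the env-var names are the ones the commit describes:

```python
import os
import sys


def auto_download_disabled() -> bool:
    """Sketch of the three ffmpeg auto-download gates."""
    if os.environ.get("SOULSYNC_NO_FFMPEG_DOWNLOAD") == "1":
        return True   # explicit opt-out for CI / build steps
    if "PYTEST_CURRENT_TEST" in os.environ:
        return True   # set by pytest per-test: covers in-test-body calls
    if "pytest" in sys.modules:
        return True   # covers pytest collection/import phase, before the
                      # per-test env var exists (when registry.py probes)
    return False
```

The `sys.modules` check is the one that matters for the registry-boot path: at `import web_server` time no test has started yet, so only module presence reveals that pytest is driving the process.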
|
|
70e1750948 |
Stop docker image bloat from auto-downloaded ffmpeg
kettui reported the dev image roughly doubled in size after a recent nightly build. codex investigation traced it back to:
1. nightly workflow runs `python -m pytest` before docker build
2. one of the new tests imports web_server (test_tidal_auth_instructions.py)
3. importing web_server constructs YouTubeClient
4. YouTubeClient.__init__ called _check_ffmpeg() — which auto-downloads a ~388 MB ffmpeg/ffprobe bundle into ./tools/ when system ffmpeg isn't on PATH (CI runner doesn't have it)
5. .dockerignore didn't exclude tools/ffmpeg or tools/ffprobe
6. docker `COPY . .` shipped the binaries
7. the immediately-following `chown -R /app` rewrote every file into a new layer — so the 388 MB payload got counted twice in image size
three fixes:
1. .dockerignore — block the auto-downloaded binaries even if they leak into the workspace (tools/ffmpeg, tools/ffprobe, .exe variants, .zip and .tar.xz download archives). Defense-in-depth so a future regression in the test/import path can't bloat the image again.
2. youtube_client — split _check_ffmpeg into a side-effect-free _locate_ffmpeg (pure existence check) and the original auto-download _check_ffmpeg. __init__ now calls _locate_ffmpeg + logs a warning when missing instead of triggering download. is_available() and the actual download dispatch paths still call _check_ffmpeg — so end users still get auto-download on first YouTube use, but `import web_server` doesn't drag a 388 MB binary into the workspace.
3. Dockerfile — replaced `COPY . .` + `chown -R /app` with `COPY --chown=soulsync:soulsync . .` + a scoped chown on just the runtime mount-point dirs. eliminates the layer that duplicated the entire /app tree just to flip ownership bits, so even legit workspace content isn't double-counted in the image.
Combined effect: image size returns to baseline + future ffmpeg leaks can't bloat it.
Inside the container nothing changes — the Dockerfile already installs system ffmpeg via apt, so YouTube downloads find it on PATH on first use and the auto-download path never fires.
2259 passed, 1 skipped, 0 failed. |
5 days ago |
|
|
e20994e1c7 |
Manual picks: stream results, don't auto-retry, fix stuck-at-0%
Three follow-on fixes to the manual-search candidates modal once people started actually using it:
1. NDJSON streaming. Manual search waited for every source to return before showing anything. Now streams one event per source as each completes — header line, source_results per source, done terminator. Frontend appends rows incrementally via response.body.getReader().
2. Manual picks no longer auto-retry on failure. New _user_manual_pick flag set on the task in /download-candidate. Both monitor retry paths (not-in-live-transfers stuck + Errored state) bail on the flag. Surfaces the failure to the user instead of silently picking a different candidate via fresh search.
3. Non-Soulseek manual picks (youtube/tidal/qobuz/hifi/deezer/soundcloud/lidarr) no longer stuck at "downloading 0%" forever. The live_transfers IF branch now marks manual-pick tasks failed directly when the engine reports Errored, instead of deferring to the monitor (which bails on manual picks). Engine fallback in else branch covers the rare race where the orchestrator's pre-populated transfer lookup is missing the entry.
Plus a deadlock fix discovered along the way: the new failure path synchronously called on_download_completed while holding tasks_lock, which itself re-acquires the same Lock — non-reentrant threading.Lock self-deadlocked the polling thread. While wedged, every other endpoint that needed the lock (including /candidates → other failed rows couldn't open modals) hung waiting. Moved completion callbacks onto a daemon thread so the lock releases first.
Plus failed/not_found/cancelled rows are now ALWAYS clickable (not just when the auto-search cached candidates) — the modal carries the manual search bar, which is the user's recourse for empty results.
Plus manual download worker now runs on a dedicated thread instead of competing with the batch's 3-worker missing_download_executor pool — saturated batches no longer queue manual picks indefinitely.
All scoped to manual picks via the _user_manual_pick flag — auto attempt flow byte-identical to before. Engine fallback gated on the flag too so auto attempts in the else branch keep the original do-nothing behavior (safety valve handles the stuck-forever case).
Also dropped _handle_failed_download from web_server.py — defined but had no callers (dead code).
17 new unit tests pin the gate behavior:
- engine fallback: Errored/Cancelled/Succeeded/InProgress transitions, manual-pick gate, terminal-state skip, soulseek skip, missing download_id skip, engine returning None, orchestrator exception
- monitor: manual-pick skips not-in-live-transfers retry + Errored retry
- IF-branch end-to-end: Errored marks failed, "Completed, Errored" hits failure branch, auto attempts defer to monitor
Manual-search endpoint tests rewritten for NDJSON: 11 cases (validation, single-source dispatch, parallel "all" dispatch, one-event-per-source streaming shape, unconfigured-source skip + reject, header metadata, per-source exception isolation).
Full suite 2259 passed, 1 skipped. |
5 days ago |
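The event shape of the NDJSON stream (header, one `source_results` event per source, `done` terminator, per-source exception isolation) can be sketched as a generator. Field names are illustrative, and the real endpoint dispatches sources in parallel rather than sequentially as done here:

```python
import json


def stream_manual_search(sources, search):
    """Yield one JSON line per event: header, per-source results, done.
    A failing source emits an error event instead of killing the stream."""
    yield json.dumps({"event": "header", "sources": list(sources)}) + "\n"
    for source in sources:
        try:
            results = search(source)
            yield json.dumps({"event": "source_results",
                              "source": source,
                              "results": results}) + "\n"
        except Exception as exc:  # per-source exception isolation
            yield json.dumps({"event": "source_results",
                              "source": source,
                              "error": str(exc),
                              "results": []}) + "\n"
    yield json.dumps({"event": "done"}) + "\n"
```

Because each line is a complete JSON document, the frontend can parse and render rows incrementally as chunks arrive, without waiting for the terminator.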