mirror of https://github.com/Nezreka/SoulSync.git
340 Commits (main)
| Author | SHA1 | Message | Date |
|---|---|---|---|
|
|
2f284efa57 |
Retag now re-embeds LYRICS tag instead of leaving it empty
Discord report (netti93). The download flow runs `enhance_file_metadata` (clears all tags) then `generate_lrc_file` (writes the .lrc sidecar AND embeds USLT). The retag flow only ran the first half — `enhance_file_metadata` cleared USLT and there was no follow-up to restore it. Two coordinated fixes (no new setting per kettui scope discipline — the user described it as "might even be an idea"; consistency was the load-bearing ask).
Fix 1 — retag calls generate_lrc_file after enhance.
`core/library/retag.py:execute_retag` now invokes `deps.generate_lrc_file` right after the `enhance_file_metadata` call, mirroring the download pipeline. New `generate_lrc_file` field on `RetagDeps`, defaults to None for backward compat with any test caller that builds RetagDeps without it. web_server's `_build_retag_deps()` factory wires in the real `core.metadata.lyrics.generate_lrc_file`. Placement matters — it runs BEFORE `safe_move_file` so the helper sees the audio file at its current path with its existing sidecar (which retag hasn't moved yet). After the embed, the audio file gets moved with USLT now present; the sidecar move step that follows is unaffected.
Fix 2 — create_lrc_file re-embeds from the existing sidecar.
`core/lyrics_client.py:create_lrc_file` used to early-return True when an .lrc / .txt sidecar already existed (skipping the LRClib fetch). For the retag case the sidecar is already there, so the shortcut hit and USLT was never re-written. Now the helper reads the existing sidecar and calls `_embed_lyrics` with its content before returning. Empty / unreadable sidecars short-circuit silently — defensive, no crash. Download flow unaffected because no sidecar exists at fetch time.
7 boundary tests pin: existing .lrc triggers re-embed, existing .txt triggers re-embed, empty sidecar skips embed, unreadable sidecar swallows error, no sidecar falls through to LRClib (download path regression guard), RetagDeps.generate_lrc_file field accepted, field optional for backward compat. Full suite: 3120 passed. |
9 hours ago |
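A minimal sketch of the sidecar re-embed shortcut described above, shown for the ID3/MP3 case only; `reembed_from_sidecar` and `embed_lyrics` are illustrative stand-ins for the project's `create_lrc_file` / `_embed_lyrics` helpers, not their exact signatures:

```python
from pathlib import Path
from mutagen.id3 import ID3, USLT


def embed_lyrics(audio_path: Path, lyrics: str) -> None:
    # ID3 branch only: replace any existing USLT frame with the sidecar content.
    tags = ID3(str(audio_path))
    tags.delall("USLT")
    tags.add(USLT(encoding=3, lang="eng", desc="", text=lyrics))
    tags.save()


def reembed_from_sidecar(audio_path: Path) -> bool:
    """If a .lrc or .txt sidecar already exists, push its content back into
    the file's USLT frame instead of skipping the embed entirely."""
    for suffix in (".lrc", ".txt"):
        sidecar = audio_path.with_suffix(suffix)
        if not sidecar.exists():
            continue
        try:
            lyrics = sidecar.read_text(encoding="utf-8", errors="replace").strip()
        except OSError:
            return True  # unreadable sidecar: skip silently, never crash
        if lyrics:
            embed_lyrics(audio_path, lyrics)
        return True      # sidecar present, no LRClib fetch needed
    return False         # no sidecar: caller falls through to the fetch path
```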
|
|
30f017d1f0 |
Stop writing TRCK as "6/0" when album total_tracks is unknown
Discord report (netti93): downloaded album tracks were tagged with
TRCK = "6/0" instead of "6/13" when source data was incomplete. The
retag tool wrote correct "6/13" because core/tag_writer.py already
handled the case.
Trace: core/metadata/enrichment.py:105 formatted unconditionally as
f"{track_number}/{total_tracks}" and many album-dict construction
sites pass total_tracks: 0 (per types.py, 0 means "unknown" — not a
real count). That 0 propagated straight to disk.
Fix at the consumer boundary so every album-dict constructor stays
unchanged. Lifted to pure helper
core/metadata/track_number_format.py:format_track_number_tag that
drops the /N suffix when total is 0 / None / negative — emits just
"6" instead. Matches retag's behavior + ID3 spec convention (TRCK
can be "N" or "N/M"). MP4 trkn tuple gets the same treatment via
format_track_number_tuple returning (6, 0) per spec's "unknown
total" marker.
Wired into all three format-write sites in enrichment.py: ID3 (TRCK),
Vorbis (tracknumber), MP4 (trkn). When source data has correct
total_tracks (album downloads via the metadata-source pipeline,
retag flow), behavior unchanged — still writes "6/13".
16 boundary tests pin every shape: known total / zero total / none
total / none track / zero track / negative inputs / string coercion
/ unparseable strings / floats truncate.
Full suite: 3113 passed.
|
9 hours ago |
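An illustrative sketch of the two formatting helpers; the exact handling of unparseable inputs is an assumption, but the core rule (drop the suffix when the total is 0 / None / negative) follows the description above:

```python
# Known total keeps the "6/13" shape; an unknown total drops the suffix
# instead of writing "6/0".
def format_track_number_tag(track_number, total_tracks) -> str:
    try:
        track = int(float(track_number))   # string/float inputs coerce, floats truncate
    except (TypeError, ValueError):
        return ""                          # unparseable track number: write nothing
    try:
        total = int(float(total_tracks))
    except (TypeError, ValueError):
        total = 0
    return f"{track}/{total}" if total > 0 else str(track)


def format_track_number_tuple(track_number, total_tracks) -> tuple[int, int]:
    # MP4 trkn variant: (6, 0), where 0 is the container's "unknown total" marker.
    try:
        track = int(float(track_number))
    except (TypeError, ValueError):
        track = 0
    try:
        total = int(float(total_tracks))
    except (TypeError, ValueError):
        total = 0
    return (track, total if total > 0 else 0)
```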
|
|
9cc09118bf |
AcoustID scanner: multi-candidate match + duration guard + multi-value retag
Closes #587. Three coordinated fixes per codex's diagnosis. AcoustID verification gate left intact — these fixes target the upstream scanner false-positive surface plus a separate retag-path gap.
Bug 1 — scanner used recordings[0] as authoritative.
`core/repair_jobs/acoustid_scanner.py:_scan_file` only checked the top fingerprint match's metadata. AcoustID often returns multiple recordings per fingerprint (sample collisions, multi-MB-record cases) and the wrong-credited recording can outrank the right-credited one. Foxxify case 2 (Nana / Nana): top match credited the wrong artist while a lower-ranked candidate matched the user's expected metadata exactly. Lifted the verifier's all-candidates check to a shared pure helper `core/matching/acoustid_candidates.py:find_matching_recording`. Both verifier and scanner can now ask "given these candidates, does ANY of them match expected (title, artist)?" with the same contract. Scanner suppresses the finding when any candidate matches.
Bug 2 — no duration check guards against fingerprint hash collisions.
Foxxify case 3: a 17-minute mashup edit fingerprinted to a 5-minute late-70s Japanese hip-hop track (different songs, fingerprint hash collision on a sampled section). Scanner had no signal to detect this and would have recommended retagging the 17-min file as the 5-min track. `duration_mismatches_strongly` in the same helper module flags drifts beyond max(60s, 35%). Scanner now skips findings when the candidate's duration disagrees strongly with the file's expected duration. Loaded duration via the existing tracks SQL (added `t.duration` to the SELECT). Returns False when either side is unknown — no behavior change for older rows without duration data.
Bug 3 — scanner retag bypassed the multi-value ARTISTS tag setting.
`core/repair_worker.py:_fix_wrong_song` called `write_tags_to_file` with single-string artist updates. The writer only wrote TPE1 (single string) and never read the user's `metadata_enhancement.tags.write_multi_artist` config. Multi-value ARTISTS tags got stripped on every retag, contradicting the post-download enrichment pipeline's behavior. Per codex's pick (option B over routing through enhance_file_metadata), extended `write_tags_to_file` with an optional `artists_list` parameter. Each format-specific writer respects the config flag the same way enrichment.py does:
- ID3: TPE1 stays as the joined display string + TXXX:Artists multi-value
- Vorbis/Opus/FLAC: `artist` display string + `artists` multi-value key
- MP4: \xa9ART as list when on, single string when off
Scanner retag derives the per-artist list by splitting AcoustID's credit through the existing `split_artist_credit` helper (same separators the matching layer already uses). Backward compatible: callers that don't pass `artists_list` get the exact same single-string write as before. No regression for the write_artist_image button or any other tag_writer caller.
15 tests on the candidate helper + duration guard. 13 tests on the tag_writer multi-value path (write/skip/single/no-list cases for FLAC + the config-gate helper). 4 new scanner regression tests pinning lower-ranked candidate suppression, no-suppression when no candidate matches, duration mismatch skip, no-skip when duration matches. Existing scanner tests updated for the new 11-column SQL select (added duration column to fake schema + test row tuples). Full suite: 3097 passed. Ruff clean. |
10 hours ago |
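The duration guard as a small runnable sketch; the threshold numbers come from the commit, while the parameter shapes are assumptions:

```python
# Flag only when the candidate's length disagrees with the expected length by
# more than max(60 seconds, 35% of the expected duration). Unknown values never flag.
def duration_mismatches_strongly(expected_seconds, candidate_seconds) -> bool:
    if not expected_seconds or not candidate_seconds:
        return False  # no data on either side: no behavior change for old rows
    expected = float(expected_seconds)
    candidate = float(candidate_seconds)
    allowed_drift = max(60.0, expected * 0.35)
    return abs(expected - candidate) > allowed_drift
```

For the 17-minute-vs-5-minute collision case the drift is roughly 720 seconds against an allowed max(60, ~357), so the scanner skips that finding.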
|
|
0aa18b0180 |
Cross-script artist aliases: include canonical name + non-strict fallback
Closes #586. Follow-up to #442 — Cyrillic / kanji canonical names weren't bridging cross-script comparisons. Reporter case: "Dmitry Yablonsky" tracks quarantined as audio mismatch with file identified as "Русская филармония, Дмитрий Яблонский" (4% artist sim) even though the Cyrillic spelling is just the Russian transliteration. Codex diagnosed three layered bugs in the alias resolution chain. This fixes all three.
Bug 1 — fetch_artist_aliases ignores canonical name + sort-name.
`core/musicbrainz_service.py:fetch_artist_aliases` only read `data['aliases']`. For artists where MB's canonical `name` IS the cross-script form (and the Latin spelling lives only in aliases — or vice versa), the missing direction never made it into the returned list. Fix: include both `data['name']` and `data['sort-name']` alongside the explicit alias entries (deduped, also pulls each alias entry's sort-name when present).
Bug 2 — lookup_artist_aliases ran search in strict mode only.
Strict mode queries `artist:"..."` only and skips MB's alias and sortname indexes. Cross-script searches found nothing under strict because the user's Latin input never matches a Cyrillic canonical name in the artist index. Fix: lifted the search-and-score logic to a private helper `_search_and_score_artists(name, strict=)` and fall back to non-strict when strict returns empty OR all results fail the trust gate. Non-strict (bare query) hits all indexes.
Bug 3 — trust gate weighted local similarity 70%.
Combined score = local_sim * 0.7 + mb_score/100 * 0.3. Cross-script pairs have local sim ~0 → combined ~0.30 → below the 0.85 threshold → cached as empty even when MB's own confidence was 100. Fix: added an MB-only escape — when MB score is >= 95 AND the result is unambiguous (top result's MB score leads the runner-up by >= 5), accept regardless of local similarity. The existing combined-score path stays intact for same-script matches (#442 Hiroyuki Sawano case still passes via that path).
12 new tests pin every layer:
- fetch_artist_aliases canonical-name inclusion + dedup against alias entries + missing-canonical handling + exception path
- strict-then-non-strict fallback (empty-strict + low-strict-score)
- trust gate MB-only escape + low-confidence rejection + ambiguity rejection (two artists same MB score) + same-script regression
- end-to-end reporter scenario with the real `artist_names_match` helper proving the bridge works for "Русская филармония, Дмитрий Яблонский" vs expected "Dmitry Yablonsky"
Existing alias tests in `test_artist_alias_service.py` updated to reflect: canonical name now appears in `fetch_artist_aliases` output, lookup makes 2 search calls (strict + non-strict fallback) on first cache miss instead of 1. Full suite: 3065 passed. |
11 hours ago |
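A sketch of the trust-gate decision with the MB-only escape; the 0.7/0.3 blend, 0.85 threshold, and the 95-plus-unambiguous escape are quoted from the commit, while the function shape is illustrative:

```python
def accept_artist_match(local_sim: float, mb_score: int, runner_up_mb_score: int = 0) -> bool:
    # Same-script matches keep using the blended score.
    combined = local_sim * 0.7 + (mb_score / 100.0) * 0.3
    if combined >= 0.85:
        return True
    # Cross-script escape: MB is highly confident and the result is unambiguous
    # (top result leads the runner-up by at least 5 points).
    if mb_score >= 95 and (mb_score - runner_up_mb_score) >= 5:
        return True
    return False
```

With local_sim near 0.04 and an MB score of 100 against an 80-point runner-up, the reporter's Cyrillic case now passes via the escape instead of being cached as empty.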
|
|
e7ecaca3fd |
Fix MTV Unplugged & live-album false-quarantine pipeline
Closes #589. Tracks from MTV Unplugged / Live At / unplugged albums consistently failed AcoustID verification with "Version mismatch: expected (live) but file is (original)". Two upstream bugs fed into the false positive — the AcoustID gate itself was correctly catching the wrong file Tidal had selected. Codex diagnosed all three layers; this fixes the two upstream causes and leaves the verifier alone.
Bug 1 — album-scoped library check false-misses owned albums.
`core/downloads/master.py:184` scored "Shy Away (MTV Unplugged Live)" (source title from playlist) vs "Shy Away" (local DB stored title) with raw string similarity. Massive length asymmetry → ~0.3 → below the 0.7 threshold → marked missing. Combined with the `allow_duplicates and batch_is_album` short-circuit that disables the global fallback for album downloads, the user's already-owned album re-triggered every track for download. Explains the screenshot showing "0 found / 7 missing" on an album the user manually placed. New pure helper `core/matching/album_context_title.py:strip_redundant_album_suffix` strips trailing parenthetical / bracket / dash suffixes whose tokens are fully subsumed by the album context — at least one version marker (live / unplugged / acoustic / session / concert / tour) overlapping with the album, and every other token is either a known marker, a year, a tolerated noise word, or a word from the album title. Album-context-implied "live" added when the album mentions unplugged / concert / tour / session. Wired into the album-confirmed scope ONLY (not global matching). Compares both raw and normalized source titles per album track and takes the max similarity, so the helper returning the input unchanged (when the album doesn't imply version context) preserves the pre-fix behavior.
Bug 2 — Tidal qualifier filter only ran on fallback searches.
`core/tidal_download_client.py:345` set `is_fallback = attempt_idx > 0` and only filtered when `is_fallback and required_qualifiers`. Primary search returned all results unfiltered, so a query for "Shy Away (MTV Unplugged Live)" could accept the studio cut if Tidal happened to rank it first. Now the qualifier filter applies to BOTH primary and fallback search attempts — log message updated to indicate which path triggered.
Bug 3 — qualifier check ignored album.name.
The legacy `_track_name_contains_qualifiers` only inspected the track name. For concert / unplugged releases the live signal typically lives in the album title, not the track title. New `_track_matches_qualifiers` accepts a track object and inspects both `track.name` AND `track.album.name`. Legacy helper preserved to keep its existing test contract.
The AcoustID version-mismatch gate at core/acoustid_verification.py is left intact — it correctly catches genuinely-wrong files that slip through upstream filters. The In My Feelings (Instrumental) test that pins this behavior continues to pass.
19 tests on the album-context helper covering MTV Unplugged variants, dash/parens/brackets suffix shapes, year tolerance, plural-form markers, the implied-live set, anti-regression cases (instrumental/remix on a studio album must NOT be stripped), empty/none defensive paths. 13 tests on the Tidal qualifier helper covering legacy track-name-only behavior preserved, qualifier in track name alone, qualifier in album name alone (the MTV Unplugged scenario), multi-qualifier requirements, no-qualifiers always passes, defensive against missing track.album, word-boundary avoiding substring false-matches, _extract_qualifiers picking up live + unplugged from the user's exact reporter query. Full suite: 3053 passed. |
12 hours ago |
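A simplified sketch of the suffix stripper's core rule, assuming plain token matching; the real helper also tolerates years, dash suffixes, and noise words, which are omitted here:

```python
import re

_VERSION_MARKERS = {"live", "unplugged", "acoustic", "session", "sessions",
                    "concert", "tour"}
_IMPLIES_LIVE = {"unplugged", "concert", "tour", "session", "sessions"}


def strip_redundant_album_suffix(title: str, album: str) -> str:
    # Only a trailing "(...)"/"[...]" suffix is considered in this sketch.
    match = re.search(r"\s*[\(\[]([^)\]]+)[\)\]]\s*$", title)
    if not match:
        return title
    suffix_tokens = set(re.findall(r"[a-z0-9]+", match.group(1).lower()))
    album_tokens = set(re.findall(r"[a-z0-9]+", album.lower()))
    implied = {"live"} if album_tokens & _IMPLIES_LIVE else set()
    # At least one version marker the album names or implies must appear in the
    # suffix, and every remaining token must come from the album title itself.
    overlap_markers = suffix_tokens & ((album_tokens & _VERSION_MARKERS) | implied)
    other_tokens = suffix_tokens - _VERSION_MARKERS - implied
    if overlap_markers and other_tokens <= album_tokens:
        return title[:match.start()].rstrip()
    return title
```

With album "MTV Unplugged", the title "Shy Away (MTV Unplugged Live)" collapses to "Shy Away", while "(Instrumental)" or "(Remix)" on a studio album passes through untouched.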
|
|
c9d4b02a02 |
Fix Deezer contributors tagging silently dropping for cache-polluted tracks
Closes #588. Contributing-artist tagging worked for some tracks but silently dropped them for others — most reproducibly when the album had been fetched before the per-track post-process ran.
Trace: the get_track_details cache check used `track_position in cached` as the "full payload" sentinel. Both `/track/<id>` AND `/album/<id>/tracks` set track_position. Only `/track/<id>` sets the `contributors` array. When album-tracks data hit the cache first, get_track_details returned the partial record → _build_enhanced_track found no contributors → the metadata-source contributors-upgrade silently fell back to single-artist.
Reporter's case (Andrea Botez - Sacrifice): the album fetch logged "Retrieved 4 tracks for album 673558211" before the post-process, which cached all 4 tracks as partial records. The contributors-upgrade then hit the partial cache and the upgrade log line never fired because len(upgraded) was never > 1.
Lifted cache-validity to a pure helper `_is_full_track_payload` that requires BOTH `track_position` AND `contributors` key presence. Empty list `[]` is valid — single-artist tracks fetched via `/track/<id>` carry it explicitly. Partial cache hits fall through to a fresh `/track/<id>` fetch, which writes the full payload back to cache.
11 boundary tests pin every shape: full payload, single-artist with empty contributors list, partial album-tracks shape, search-result shape, none/non-dict, and the cache-hit/cache-miss/api-failure paths on get_track_details (including the exact reporter-scenario regression). Full suite: 3021 passed. |
13 hours ago |
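The cache-validity predicate is small enough to show in full; this sketch matches the rule described above (both keys required, an empty contributors list still counts as full):

```python
def _is_full_track_payload(cached) -> bool:
    # Only a /track/<id> response carries the contributors key; album-tracks
    # records set track_position but never contributors, so they fail here and
    # fall through to a fresh per-track fetch.
    if not isinstance(cached, dict):
        return False
    return "track_position" in cached and "contributors" in cached
```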
|
|
083355ec8c |
Persist Find & Add selections as permanent server-playlist match overrides
Closes #585. When a Spotify source track had a versioned suffix not present in the local file ("Iron Man - 2012 - Remaster" vs "Iron Man"), the auto-matcher missed the pair. User could click Find & Add to pick the right local file — that worked, file got added to the Plex playlist — but the source track stayed in Missing while the added file appeared in Extra, because the matcher kept no record of the user-confirmed pairing. On the next sync the source track re-tried to download.
Fix: every Find & Add selection now writes a (spotify_track_id → server_track_id) override into sync_match_cache at confidence=1.0. The matching algorithm runs an override pass BEFORE the existing exact and fuzzy passes, so any user-confirmed pair short-circuits straight to "matched" without going through title normalization. Covers every mismatch class — dash-suffix remasters, covers / karaoke, alt masters, cross-language titles, typo'd local files.
- core/sync/match_overrides.py (new) — pure helpers resolve_match_overrides + record_manual_match. 18 boundary tests pin: cache hits, cache misses falling through to normal matching, stale-cache (server track removed) handled gracefully, str/int id coercion, partial cache hits, defensive against non-dict inputs and DB exceptions.
- web_server.py — get_server_playlist_tracks runs the override pre-pass before exact/fuzzy matching. server_playlist_add_track accepts source_track_id + source_title + source_artist and persists the override after every successful add (Plex / Jellyfin / Navidrome). source_track_id added to the source_tracks payload so the frontend has it.
- webui/static/pages-extra.js — _serverSelectTrack sends source_track_id + source_title + source_artist when adding a track from a mirrored playlist context.
- Sync match cache schema unchanged — already had UNIQUE (spotify_track_id, server_source) which fits the override semantics perfectly. Manual overrides distinguished from auto-discovered matches by confidence=1.0.
Full suite: 3010 passed. |
15 hours ago |
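An illustrative sketch of the override pre-pass; the dict shapes and names here are assumptions rather than the exact `resolve_match_overrides` contract:

```python
def resolve_match_overrides(source_tracks, server_tracks_by_id, overrides):
    """overrides: {spotify_track_id: server_track_id}, stored at confidence 1.0."""
    matched, remaining = {}, []
    for track in source_tracks:
        track_id = track.get("id")
        server_id = overrides.get(str(track_id)) if track_id is not None else None
        server_track = server_tracks_by_id.get(str(server_id)) if server_id else None
        if server_track is not None:
            matched[track_id] = server_track   # user-confirmed pair, no fuzzy matching
        else:
            remaining.append(track)            # stale or absent override: normal matching
    return matched, remaining
```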
|
|
f4cff78f13 |
Quarantine management — list, approve, delete, recover
Closes #584. Quarantined files used to sit in ss_quarantine/ with a thin sidecar — no UI, no recovery, no way to see what got dropped. This adds the management surface the user needs without going to the filesystem.
UI: new "Quarantine" button on the downloads page header opens a modal with every quarantined file (filename, expected track/artist, reason, when, size). Three actions per row:
- Approve (one-click): restores the file, re-runs the post-process pipeline with ONLY the failing check skipped, lands in the library with full tags + lyrics + scan
- Recover (legacy fallback): moves to Staging for thin-sidecar entries that lack the embedded context Approve needs
- Delete: permanent removal of file + sidecar
Per-check bypass: context['_skip_quarantine_check'] = 'integrity' / 'acoustid' / 'bit_depth'. Skips ONLY the named check — other quality gates stay live. No blanket bypass-all flag.
Sidecar expansion: move_to_quarantine now persists the full json-serializable context via serialize_quarantine_context (drops non-JSON-safe values, walks nested dicts/lists/sets, str-coerces unknown objects) plus the trigger name. Existing thin sidecars are detected and routed to Recover instead of Approve.
Pure helpers in core/imports/quarantine.py: list_quarantine_entries / delete_quarantine_entry / approve_quarantine_entry / recover_to_staging / serialize_quarantine_context. 27 tests pin every shape: orphan files / orphan sidecars / corrupt sidecars / collision-safe filename restoration / full-context vs thin-sidecar dispatch / json round-trip safety.
Four new endpoints in web_server.py — thin glue around the helpers: GET /api/quarantine/list, DELETE /api/quarantine/<id>, POST /api/quarantine/<id>/approve, POST /api/quarantine/<id>/recover.
Download modal status differentiates "🛡️ Quarantined" from "❌ Failed" so recoverable files are visible at a glance — checked against the error_message text, no schema change needed.
Pipeline changes are three minimal per-check conditionals at the existing quarantine sites in core/imports/pipeline.py. Each move_to_quarantine call now passes its trigger name so the sidecar records which check fired.
Full suite: 2992 passed. |
16 hours ago |
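A sketch of a JSON-safe context serializer along the lines described above; the str-coercion of unknown objects follows the commit, everything else is illustrative:

```python
def serialize_quarantine_context(value):
    # Keep JSON-native scalars, walk containers recursively, and str-coerce
    # anything else so the sidecar always survives json.dumps.
    if isinstance(value, (str, int, float, bool)) or value is None:
        return value
    if isinstance(value, dict):
        return {str(k): serialize_quarantine_context(v) for k, v in value.items()}
    if isinstance(value, (list, tuple, set)):
        return [serialize_quarantine_context(v) for v in value]
    return str(value)  # unknown object: coerce rather than drop the whole sidecar
```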
|
|
177bd85355 |
Configurable duration tolerance for downloaded-file integrity check
Previously hardcoded at 3s (5s for tracks >10min) — files drifting past that got quarantined with no user override. Live recordings, alternate masterings, and some legitimate uploads routinely drift further. New setting `post_processing.duration_tolerance_seconds`. Default 0 means "use auto-scaled defaults" (unchanged behavior for users who don't touch it). Positive value overrides the per-track defaults. Capped at 60s — past that the check is effectively off. Logic lifted to pure helper `resolve_duration_tolerance` in file_integrity.py. Coerces every plausible input (None / empty / zero / negative / unparseable / above-cap / numeric string / float) to either a float override or None for auto. 12 tests pin every shape. Wired into `core/imports/pipeline.py` at the integrity-check call site — runs for ALL matched downloads (Soulseek / Tidal / Qobuz / HiFi / YouTube / Deezer-direct) since they all share that pipeline. Settings UI input under Settings → Metadata → Post-Processing. |
18 hours ago |
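A sketch of the tolerance resolver under the stated rules; whether values above the cap clamp to 60 or disable the check outright is not spelled out, so clamping is an assumption here:

```python
def resolve_duration_tolerance(raw) -> float | None:
    # None means "use the auto-scaled per-track defaults" (3s, 5s for >10min tracks).
    try:
        value = float(raw)
    except (TypeError, ValueError):
        return None          # unset / empty / unparseable: keep auto defaults
    if value <= 0:
        return None          # 0 or negative: keep auto defaults
    return min(value, 60.0)  # positive override, capped at 60 seconds
```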
|
|
0769fcd5cc |
Fix Soulseek downloads losing collab artist tags
Soulseek matched-download contexts populate `original_search_result` with `artist` (singular string) and no `artists` list — the full multi-artist array lives on `track_info` (the matched Spotify track object). `extract_source_metadata` only read `original_search.artists`, so the Soulseek path always fell through to the single-artist branch and TPE1 ended up with the primary artist only. Deezer-direct downloads were unaffected because their context populates `original_search.artists` as a proper list. Lifted artist resolution into a pure helper `core/metadata/artist_resolution.py:resolve_track_artists` that walks `original_search.artists` → `track_info.artists` → `artist_dict.name` fallback chain. Normalizes mixed list-item shapes (Spotify-style dicts, bare strings, anything else stringified) and drops empty entries. 13 new tests pin the resolution order, fallback chain, mixed-shape normalization, whitespace stripping, and empty/none handling. The existing `_artists_list` no-fall-through test in `test_multi_artist_tag_settings.py` was updated to reflect the new contract (always populated; multi-value write still gated on `len > 1`) plus a new regression test for the Soulseek shape. Composes with the existing Deezer per-track upgrade (still fires when single-artist + track_id available) and feat_in_title / artist_separator settings (still drive the joined ARTIST string downstream). |
1 day ago |
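An illustrative sketch of the fallback chain; the final `artist_dict.name` step is assumed to read the singular `artist` entry on the search result:

```python
def resolve_track_artists(original_search: dict, track_info: dict) -> list[str]:
    def normalize(items):
        # Accept Spotify-style dicts, bare strings, or anything else (stringified);
        # drop empty entries after whitespace stripping.
        names = []
        for item in items or []:
            name = str(item.get("name", "")) if isinstance(item, dict) else str(item)
            name = name.strip()
            if name:
                names.append(name)
        return names

    # Resolution order: original_search.artists, then track_info.artists.
    for candidate in (original_search.get("artists"), track_info.get("artists")):
        names = normalize(candidate)
        if names:
            return names
    # Last resort: the singular artist entry (string or dict with a name).
    artist = original_search.get("artist")
    if isinstance(artist, dict):
        artist = artist.get("name")
    name = str(artist).strip() if artist else ""
    return [name] if name else []
```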
|
|
8a11a660af |
Extract manual import route handlers
Move the remaining manual import endpoint logic out of web_server.py and into core.imports.routes behind ImportRouteRuntime. The Flask endpoints now stay as thin compatibility wrappers for album/track search, album match/process, single-file import processing, and batched singles processing. Keep legacy test patch points intact by re-exporting build_album_import_match_payload from web_server and routing singles_process through an injected process_single_import_file callable. This preserves existing route-level monkeypatch behavior while keeping the extracted helper testable. Add focused helper coverage for Hydrabase enqueueing, search limit clamping, album match payload forwarding, album import side effects, single-file worker outcomes, malformed manual matches, and singles aggregation/injected-worker behavior. Verification: py_compile and git diff --check passed locally; bundled-Python smoke covered the extracted helpers. Claude reran the project tests and reported all tests passing. |
1 day ago |
|
|
d703d33178 |
Extract import staging route helpers
Move import staging files/groups/hints/suggestions controller logic out of web_server.py and into core.imports.routes behind an ImportRouteRuntime dependency object. Keep the existing Flask routes as thin compatibility wrappers so the UI endpoint surface stays unchanged. Add focused tests for staging file filtering, album grouping, hint generation, cached suggestions, empty missing staging paths, and error payloads from failed path/metadata reads. Verification: py_compile passed for web_server.py, core/imports/routes.py, and tests/imports/test_import_routes.py. A bundled-Python smoke pass covered the extracted helper behavior; pytest was not available in this Windows shell because the bundled Python lacks pytest and the repo venv is WSL/Linux-only here. |
1 day ago |
|
|
32bf52cc18 |
Extract WebUI asset helpers
- move Vite manifest handling and SPA route rules into core/webui - keep web_server.py focused on Flask route wiring - add tests for asset rendering and manifest reload behavior - keep image URL normalization coverage alongside the metadata helpers |
2 days ago |
|
|
fdda64963f |
Drop platform-biased trailing-backslash test for derive_artist_folder
POSIX os.path.dirname doesn't treat '\' as separator, so the assertion 'Drake' in result fails on Linux CI even though the function's rstrip removes the trailing backslash correctly. The forward-slash test already covers the trim contract. |
2 days ago |
|
|
89246a7304 |
Write artist.jpg to artist folder so Navidrome shows real photos
Closes #572 (rhwc). Navidrome has no API for setting an artist image — it reads `artist.jpg` (or `folder.jpg`) from the artist folder during library scans. SoulSync's `update_artist_poster` for Navidrome was a no-op, so users only ever saw album-art-derived thumbnails as the artist photo.
- new "Write Artist Image" button on artist detail page
- POST /api/artist/<id>/write-image-to-disk derives the artist folder from any track's resolved file_path (reuses _resolve_library_file_path so docker mount translation + library.music_paths probes from #558 apply), fetches the photo from the configured metadata source priority chain, downloads with content-type validation, writes atomically via `<filename>.tmp + os.replace`
- when active server is Navidrome, triggers a library scan immediately so the file is picked up
- respects existing artist.jpg (frontend prompts before overwriting) so user-supplied photos aren't clobbered
- works for plex / jellyfin too as a fallback layer — both servers also read artist.jpg from disk
26 tests pin the pure helpers in core/library/artist_image.py: folder derivation (trailing sep / empty / non-string), URL picking (missing attr / whitespace / non-string), download (non-image content-type / 404 / timeout / empty body), atomic write (replace / temp-cleanup-on-failure / overwrite guard / missing folder). |
2 days ago |
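The atomic-write pattern named above (`<filename>.tmp` plus `os.replace`) as a small standalone sketch; the function name and signature are illustrative:

```python
import os


def write_image_atomically(target_path: str, data: bytes) -> None:
    # Stage the bytes into a sibling .tmp file, then swap it in with os.replace
    # so a crash mid-write never leaves a truncated artist.jpg behind.
    tmp_path = target_path + ".tmp"
    try:
        with open(tmp_path, "wb") as handle:
            handle.write(data)
        os.replace(tmp_path, target_path)  # atomic on the same filesystem
    except OSError:
        if os.path.exists(tmp_path):
            os.remove(tmp_path)            # never leave a stray .tmp behind
        raise
```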
|
|
6ce185491d |
Add per-download Audit Trail modal to Library History
- new "Audit" button on each download row in the library history modal opens a second modal visualizing the download lifecycle as an interactive horizontal stepper (request → source → match → verify → process → place) with click-to-expand detail cards - hero header with album art + track title + meta line + status pills (source / quality / acoustid result) - three tabs: Lifecycle / Tags / Lyrics - Tags tab reads the audio file live via mutagen at audit-open time via new GET /api/library/history/<id>/file-tags endpoint; file is the single source of truth so background enrichment writes (audiodb / lastfm / genius / replaygain / lyrics fetch) show up too. flat key/value rows stacked vertically (label-above- value) so long MBIDs / URLs / joined genre lists wrap cleanly. source IDs grouped per-service into 2-col sub-card grid. - Lyrics tab renders the full transcript with dimmed timecodes. - post-processing step infers observable changes from source-vs- final state (format conversion, file rename via tag template, folder template). - "Download History" button also added to the Downloads page batch panel header so it's reachable outside the dashboard. - mobile responsive: tabs + stepper scroll horizontally, modal goes full-screen, hero stacks below 480px. 19 helper tests pin the mutagen reader: id3 (TIT2/TPE1/TALB + TXXX + USLT + APIC), vorbis (FLAC dict + _id/_url passthrough), file metadata (format / bitrate / duration), defensive paths (empty / missing file / mutagen returns None / mutagen raises), stringify edge cases (list / tuple / int / frame-with-text / whitespace). |
2 days ago |
|
|
46206b3240 |
Pin type='track' / type='artist' collision case for album-type normalizer
|
2 days ago |
|
|
5eae24b8bb |
Fix $albumtype defaulting to album for non-Spotify sources
- legacy duck-typed builder only checked the `album_type` key; deezer uses `record_type`, tidal uses `type` (uppercase), some flattened musicbrainz shapes use `primary-type` — all defaulted to album, so EPs and singles ended up filed under Album/ in user templates that reference $albumtype
- widen lookup to album_type / record_type / type / primary-type and route through new pure `_normalize_album_type` helper that case-folds + validates against the canonical token set (album / single / ep / compilation), unknown → album
- typed-converter path (spotify / deezer / itunes / discogs / mb / hydrabase / qobuz) unchanged — those were already correct
Discord report (CAL). |
2 days ago |
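A sketch of the widened lookup and normalizer; whether an unknown value on an earlier key should fall through to the next key is not specified, so this version stops at the first populated key:

```python
_CANONICAL_ALBUM_TYPES = {"album", "single", "ep", "compilation"}
_ALBUM_TYPE_KEYS = ("album_type", "record_type", "type", "primary-type")


def _normalize_album_type(album: dict) -> str:
    # Probe the keys the different sources actually use, case-fold the value,
    # and fall back to "album" for anything outside the canonical token set.
    for key in _ALBUM_TYPE_KEYS:
        value = album.get(key)
        if isinstance(value, str) and value.strip():
            token = value.strip().casefold()
            return token if token in _CANONICAL_ALBUM_TYPES else "album"
    return "album"
```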
|
|
b9feed1a67 |
Add min delay between slskd searches (Bell Canada anti-abuse fix)
- new soulseek.search_min_delay_seconds knob forces a gap between consecutive searches; smooths the burst pattern that trips ISP anti-abuse (Reddit report: Bell Canada cuts the WAN after rapid peer-connection spikes) even when the existing 35/220 sliding-window cap isn't hit
- throttle math lifted to a pure compute_search_wait_seconds helper so the gate logic is testable independent of asyncio.sleep + the singleton client
- new field on settings → connections → soulseek; default 0 = disabled so existing users see no change
15 helper-boundary tests pin defaults / no-throttle, sliding-window cap (legacy), min-delay (the new burst-smoother), max-of-both gates, and defensive paths. |
2 days ago |
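An illustrative sketch of the throttle math; the 35-searches-per-220-seconds figures come from the commit, and the helper returns the larger of the two gates so the new knob never weakens the legacy cap. Parameter shapes are assumptions:

```python
def compute_search_wait_seconds(now: float, recent_search_times: list[float],
                                last_search_time: float | None,
                                min_delay_seconds: float,
                                window_limit: int = 35,
                                window_seconds: float = 220.0) -> float:
    waits = [0.0]
    # Legacy sliding-window cap: if the window is full, wait until the oldest
    # search in the window ages out.
    in_window = [t for t in recent_search_times if now - t < window_seconds]
    if len(in_window) >= window_limit:
        waits.append(window_seconds - (now - min(in_window)))
    # New burst smoother: enforce a minimum gap since the previous search.
    if min_delay_seconds > 0 and last_search_time is not None:
        waits.append(min_delay_seconds - (now - last_search_time))
    return max(waits)
```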
|
|
6233860d66 |
Fix Copy Debug Info music_source + surface missing services
- music_source / spotify_connected / spotify_rate_limited were reading a non-existent 'spotify' key on _status_cache and silently falling through to the missing-value default (always 'unknown' / False). Routed through the canonical accessors get_primary_source + get_spotify_status now.
- added hydrabase_connected, youtube_available, hifi_instance_count, and always_available_metadata_sources so the debug dump reflects the full service surface
- removed a local re-import of get_spotify_status that was making python 3.12 treat the name as function-scoped, breaking the new lambda above it (NameError on free variable) — module-level import already exists
11 endpoint-level tests pin music_source / spotify_* / hydrabase_* / youtube_available / always_available_metadata_sources / hifi_instance_count and the defensive fall-through paths when each lookup raises. |
2 days ago |
|
|
4892baf8d4 |
Skip already-owned tracks during download discography
- new track_already_owned helper wraps db.check_track_exists at the same confidence threshold the discography backfill repair job uses (0.7) — name+artist+album, format-agnostic so blasphemy-mode libraries (flac → mp3 + delete original) match correctly
- endpoint runs the check after the artist + content-type filters and before add_to_wishlist, so a second discography click on the same artist no longer re-queues every track that already downloaded
- per-album response carries a new tracks_skipped_owned counter alongside the existing artist/content/wishlist skip categories
Discord report (Skowl). |
2 days ago |
|
|
d4ad5bf57f |
Filter cross-artist + content-type tracks during download discography
- drop tracks where the requested artist isn't named in track.artists (keeps features, drops compilation / appears_on contamination)
- honor watchlist.global_include_live/remixes/acoustic/instrumentals the same way the discography backfill repair job already does
- surface per-album skip counts in the ndjson stream (artist mismatch + content filter) so the ui can show what was filtered
Closes #559. |
2 days ago |
|
|
56ae10693b |
Album Completeness: surface diagnostic when resolver can't find album folder
GitHub issue #558: clicking Auto-Fill / Fix Selected on the Album Completeness findings page returned a flat "Could not determine album folder from existing tracks" error with no diagnostic. Reporter is on Navidrome on Docker — the path resolver in `core/library/path_resolver.py` couldn't find any of the album's tracks on disk because Navidrome's Subsonic API doesn't expose filesystem library paths the way Plex's API does (probed in #476). Default settings → `library.music_paths` empty → no base directories to probe → silent None. User had no signal about what to configure. Not a regression of #476 — that fix targeted Plex auto-discovery and worked correctly for it. Navidrome was never covered because the protocol gives the resolver nothing to probe. Fix scoped to the diagnostic surface, not auto-magic discovery: - Added `resolve_library_file_path_with_diagnostic` returning `(resolved, ResolveAttempt)`. ResolveAttempt records what the resolver tried — `raw_path_existed`, `base_dirs_tried`, `had_config_manager`, `had_plex_client`. Pure data, no rendering opinions. - Legacy `resolve_library_file_path` becomes a thin wrapper that drops the attempt; every existing call site is unchanged. - `RepairWorker._fix_incomplete_album` now uses the diagnostic helper and renders a multi-part error via `_build_unresolvable_album_folder_error`: names the active media server, shows one sample DB-recorded path, lists every base directory the resolver actually probed, and points the user at Settings → Library → Music Paths as the actionable fix. - Distinguishes empty-base-dirs vs tried-and-failed cases so the user knows whether to add a mount or fix the existing one. - No auto-probing of common Docker conventions (`/music`, `/media`, etc). Speculative — could resolve to wrong dirs on the suffix-walk if a conventional path happens to contain a partial collision. User stays in control. 12 new tests: - 7 in `tests/library/test_path_resolver.py`: tuple-shape contract, raw-path-existed short-circuit, base-dirs listed even on walk failure, had-flags reflect caller inputs, no-base-dirs returns None with empty attempt, legacy `resolve_library_file_path` delegates correctly across happy / suffix-walk / failure paths. - 8 in `tests/test_repair_worker_unresolvable_folder_error.py`: active server name in error, sample DB path verbatim, base dirs listed, empty-base-dirs phrased differently, Settings hint always present, defensive against None attempt / missing sample / missing config_manager. Full pytest sweep: 2774 passed. |
2 days ago |
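A simplified sketch of the diagnostic-carrying resolver; the real helper does a suffix walk rather than a basename probe, and the wrapper and signature details here are assumptions:

```python
import os
from dataclasses import dataclass, field


@dataclass
class ResolveAttempt:
    # Pure data about what the resolver tried; no rendering opinions.
    raw_path_existed: bool = False
    base_dirs_tried: list[str] = field(default_factory=list)
    had_config_manager: bool = False
    had_plex_client: bool = False


def resolve_with_diagnostic(raw_path: str, base_dirs: list[str]) -> tuple[str | None, ResolveAttempt]:
    attempt = ResolveAttempt(raw_path_existed=os.path.exists(raw_path))
    if attempt.raw_path_existed:
        return raw_path, attempt              # DB path is valid as recorded
    for base in base_dirs:
        attempt.base_dirs_tried.append(base)  # record every probe for the error message
        candidate = os.path.join(base, os.path.basename(raw_path))
        if os.path.exists(candidate):
            return candidate, attempt
    return None, attempt                      # caller distinguishes "no base dirs" vs "tried and failed"
```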
|
|
698ecc99f0 |
Import history: Clear History button now sweeps stuck 'processing' rows
Reported: Clear History button on the Import page left zombie rows behind. Every survivor showed "⧗ Processing" status from 2-9 days ago. Trace: `_record_in_progress` inserts a `status='processing'` row up-front so the UI can render the in-flight import while it runs; `_finalize_result` updates it to `completed`/`failed` when the import finishes. When the worker is killed mid-import (server restart, crash), the row never gets finalized — stays at `processing` forever. The clear-history endpoint's SQL `DELETE ... WHERE status IN (...)` listed every terminal status but omitted `processing`, so zombies survived every click. Fix: add `processing` to the delete list, but guard against nuking genuinely-live imports by intersecting against the worker's `_snapshot_active()` map — any folder hash currently registered in `_active_imports` is excluded from the delete via an `AND folder_hash NOT IN (...)` clause. `pending_review` deliberately left out so user still has to approve/reject those explicitly. One endpoint touched (`/api/auto-import/clear-completed` in web_server.py). No worker changes — guard reuses the existing `_snapshot_active()` method that the UI poller already calls. 5 new tests in `tests/imports/test_auto_import_clear_completed_endpoint.py`: - Zombie `processing` rows swept, live `processing` row preserved (folder_hash currently in `_active_imports` survives) - Response count matches actual delete count - Empty active-set branch (unparameterized DELETE) — pinned because an empty SQL `IN ()` would be a syntax error - Worker-unavailable returns 500 (pre-existing guard not regressed) - `pending_review` rows always survive — never auto-swept Full pytest sweep: 2758 passed (one pre-existing flaky timing test on `test_import_singles_parallel.py` failed under full-suite CPU load, passes in isolation in 2.95s — unrelated to this change). |
2 days ago |
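A sketch of the guarded delete; the table name and the list of terminal statuses are illustrative, but the NOT IN exclusion and the empty-active-set branch mirror the behavior pinned above:

```python
def clear_completed_sql(active_folder_hashes: list[str]) -> tuple[str, list[str]]:
    # 'processing' is swept (zombie rows), 'pending_review' deliberately never is.
    statuses = ("completed", "failed", "processing")
    sql = "DELETE FROM import_history WHERE status IN ({})".format(
        ",".join("?" * len(statuses)))
    params = list(statuses)
    if active_folder_hashes:
        # Exclude anything currently registered as an active import; the branch
        # matters because an empty "NOT IN ()" would be a SQL syntax error.
        sql += " AND folder_hash NOT IN ({})".format(
            ",".join("?" * len(active_folder_hashes)))
        params.extend(active_folder_hashes)
    return sql, params
```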
|
|
3af2d34cee |
Auto-import: fall through to other metadata sources when primary returns no match
Discord report: 16 Bandcamp indie albums sat in staging because auto-import couldn't identify them, but the manual search bar at the bottom of the Import Music tab found the same albums fine. Trace: `_search_metadata_source` only queried `get_primary_source()` — single source, no fallback. Meanwhile `search_import_albums` (manual search bar) already iterated `get_source_priority(get_primary_source())` and broke on the first source with results. Asymmetric behavior, same album: manual worked, auto-import didn't. Fix: lift `_search_metadata_source` to use the same source-chain pattern. Try primary first; if it returns nothing OR scores below the 0.4 threshold, fall through to the next source in priority order. First source producing a strong-enough match wins. Result dict carries the `source` that actually matched (not the primary name) so downstream `_match_tracks` calls the right client. Defensive per-source try/except so a rate-limited or auth-failed source doesn't abort the chain. Unconfigured sources (client=None) silently skipped. Cin-shape lift: scoring math extracted to pure `_score_album_search_result` helper so the weight tweaks (album 50% / artist 20% / track-count 30%) are pinned at the function boundary, independent of the orchestrator (per-source iteration, exception containment, threshold check). Weight constants exposed at module level (`_ALBUM_NAME_WEIGHT`, `_ARTIST_NAME_WEIGHT`, `_TRACK_COUNT_WEIGHT`) — greppable, bumpable in one place. Pre-extraction these were magic numbers inline. 27 new tests: - 9 integration tests in `test_auto_import_multi_source_fallback.py`: primary-success path unchanged (no fallback fires, only primary client called), primary-empty falls through, primary-weak-score falls through, first fallback success stops the chain (no wasted API calls on remaining sources), all-sources-fail returns None, per-source exception contained, unconfigured-source skipped, result `source` field reflects winning source, `identification_confidence` from winning source. - 18 helper tests in `test_album_search_scoring.py`: weights sum to 1.0, album weight dominant (invariant pin), perfect-match returns 1.0, per-component contribution (album / artist / track-count), Bandcamp vs streaming track-count mismatch (7-files vs 4-tracks case still scores ~0.87 above threshold), zero-track-count and zero-file guards, huge-mismatch non-negative guard, list-of-strings artist shape, missing `.name` / `.artists` / `None` total_tracks edge cases. Backwards compatible: single-source users see no change — chain just has one entry. Existing test `test_search_metadata_source_extracts_artist_id_from_dict_artist` needed one extra patch line for `get_source_priority`. Full pytest sweep: 2754 passed. |
2 days ago |
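A sketch of the extracted scoring helper; the string scorer and the track-count ratio are stand-ins, though this blend does reproduce the ~0.87 figure quoted above for a perfect name/artist match with 7 files against 4 listed tracks:

```python
from difflib import SequenceMatcher

_ALBUM_NAME_WEIGHT = 0.5
_ARTIST_NAME_WEIGHT = 0.2
_TRACK_COUNT_WEIGHT = 0.3


def _similarity(a: str, b: str) -> float:
    # Stand-in for whatever string scorer the project actually uses.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def _score_album_search_result(album_name: str, artist_name: str, file_count: int,
                               candidate_name: str, candidate_artist: str,
                               candidate_tracks: int) -> float:
    name_score = _similarity(album_name, candidate_name)
    artist_score = _similarity(artist_name, candidate_artist)
    if file_count and candidate_tracks:
        track_score = min(file_count, candidate_tracks) / max(file_count, candidate_tracks)
    else:
        track_score = 0.0  # zero-track-count / zero-file guard
    return (name_score * _ALBUM_NAME_WEIGHT
            + artist_score * _ARTIST_NAME_WEIGHT
            + track_score * _TRACK_COUNT_WEIGHT)
```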
|
|
d5de724f9b |
Multi-artist Deezer upgrade + double-append guard hardening
Two follow-ups to the multi-artist tag settings PR:
1. Deezer contributors upgrade — closes the "known limitation"
flagged in the prior commit. Deezer's `/search` endpoint only
returns the primary artist for each track; the full contributors
array (feat., remix collaborators, producers credited as artists)
lives on `/track/<id>` and gets parsed by `_build_enhanced_track`.
Without the upgrade Deezer-sourced tracks never got multi-artist
tags even with the right settings on.
Fix in `core/metadata/source.py`: when source==deezer AND the
search response had a single artist AND a track_id is available,
fetch full track details via `get_deezer_client().get_track_details`
and replace `all_artists` with the upgraded list.
- One extra API call per affected Deezer track
- Skipped when search already returned multiple (no-op fast path)
- Skipped for non-Deezer sources (Spotify/Tidal/iTunes search
responses already include all artists)
- Skipped when no track_id is available
- Defensive try/except: on /track/<id> failure (network error,
deezer client unavailable), fall through to the search-result
list — never lose the data we already had
2. Double-append guard hardened with a word-boundary regex.
Prior commit checked for `"feat." not in title.lower() and "(ft."
not in title.lower()` — too narrow. Source platforms produce
wildly different feat-marker conventions: "(feat. X)", "(Feat X)",
"(FEAT X)", "(Featuring X)", "[feat. X]", "ft. X" (no parens),
"FT. X", etc. Any of these as the SOURCE title would cause a
double-append: `"Track (Feat X) (feat. Y)"`.
Replaced with `re.search(r'\b(?:feat|feat\.|featuring|ft|ft\.)\b',
title, IGNORECASE)`. Word-boundary regex catches every common
variant. Substring matches like "Aftermath" containing `ft`
correctly fall through to the append path (pinned by a regression
test).
16 new tests (29 total in the file):
- 9 parametrized variants of the double-append guard
- 1 substring guard ("Aftermath")
- 6 Deezer upgrade scenarios (fires when expected, doesn't fire
for non-Deezer / multi-artist search / no track_id, defensive
fall-through on failure, no false-positive when /track/<id>
confirms single artist)
Full pytest 2727 passed.
|
3 days ago |
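The word-boundary guard quoted above, as a runnable check; any existing feat-marker in the source title blocks a second append, while substrings like "Aftermath" still fall through to the append path:

```python
import re

_FEAT_MARKER = re.compile(r"\b(?:feat|feat\.|featuring|ft|ft\.)\b", re.IGNORECASE)


def title_already_credits_features(title: str) -> bool:
    return bool(_FEAT_MARKER.search(title))


assert title_already_credits_features("Track (Feat X)")
assert title_already_credits_features("Song ft. Somebody")
assert not title_already_credits_features("Aftermath")  # substring, no word boundary
```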
|
|
c11a5b7eab |
Multi-artist tag settings: implement artist_separator + feat_in_title + populate _artists_list
Three settings on Settings → Metadata → Tags were partially or
completely unimplemented. Reporter (Netti93) traced each one.
(1) `write_multi_artist` only "worked" because of a never-populated
`_artists_list` field. `core/metadata/source.py` built
`metadata["artist"]` as a hardcoded ", "-joined string but never
assigned `metadata["_artists_list"]`. `core/metadata/enrichment.py`
line 107 reads that field and gates the multi-value tag write
on `len(_artists_list) > 1` — always saw an empty list, silently
no-op'd the write.
(2) `artist_separator` (default ", ") was referenced in the UI +
settings.js save path but ZERO Python code read the value. Every
multi-artist track ended up with hardcoded ", " regardless of
what the user picked.
(3) `feat_in_title` (when true: pull featured artists into the title
as " (feat. X, Y)" and leave only primary in the ARTIST tag —
Picard convention) had no implementation at all.
Fix in source.py:
* Populate `_artists_list` from the search response's artists array
* Read `feat_in_title` and `artist_separator` configs
* When `feat_in_title=True` and >1 artist: ARTIST = primary only,
append "(feat. X, Y)" to title with double-append guard
* Else: ARTIST = artists joined with `artist_separator`
* Single-artist case unaffected by either setting
Double-append guard uses a word-boundary regex catching all common
"feat" variants source platforms produce — `feat`, `feat.`,
`featuring`, `ft`, `ft.` — case-insensitive. Substring matches
(e.g. "Aftermath" containing "ft") correctly fall through to the
append path.
Fix in enrichment.py ID3 branch:
* TPE1 stays as the display string (with separator or primary-only
per the user's settings)
* Multi-value list goes to a separate `TXXX:Artists` frame (Picard
convention) when `write_multi_artist` is on
* Pre-fix the ID3 path wrote TPE1 twice — single-string then list
— and the second `add` overwrote the first, clobbering both the
configured separator AND the feat_in_title semantics. Vorbis path
was already correct (separate "artist" + "artists" keys).
Known limitation (flagged in WHATS_NEW): Deezer's `/search` endpoint
only returns the primary artist. The full contributors array lives
on `/track/<id>`. Enrichment uses search-result data so Deezer-
sourced tracks may still get only the primary artist until a follow-
up commit wires the per-track contributors fetch into the enrichment
flow. Spotify, Tidal, and iTunes search responses include all
artists so they work now.
23 new tests in `tests/metadata/test_multi_artist_tag_settings.py`:
* `_artists_list` populated for multi/single/no-artist cases
* `artist_separator` drives ARTIST string (default ", " + custom
";" + custom "; " + " & ")
* Single-artist case unaffected by either setting
* `feat_in_title=True` pulls featured to title, leaves primary in
ARTIST
* `feat_in_title` no-op for single artist
* Double-append guard recognizes 9 source-title variants ("(feat.
X)", "(Feat. X)", "(FEAT X)", "(feat X)", "(Featuring X)",
"[feat. X]", "ft. X", "(ft X)", "FT. X")
* Substring guard test pins "Aftermath" doesn't false-positive
* Combined-settings precedence: feat_in_title wins ARTIST string
but `_artists_list` carries everyone for multi-value tag
Full pytest 2711 passed.
|
3 days ago |
|
|
fc573a5f19 |
AudioDB worker: stop infinite loop on direct-ID lookup failure (#553)
Track enrichment was stuck in a constant retry loop. Logs showed
nothing but `Read timed out. (read timeout=10)` from
`lookup_track_by_id` repeating against the same track ID. AudioDB
itself was being hammered nonstop with no progress.
Cause: when an entity already has `audiodb_id` populated (from a
manual match or earlier scan) but `audiodb_match_status` is still
NULL — an inconsistent state some import paths can leave behind —
the worker tries a direct ID lookup. If that lookup fails (returns
None on timeout, which AudioDB's `track.php` endpoint hits
frequently because it's slow), the prior code logged "preserving
manual match" and returned WITHOUT marking status. Row stayed NULL
→ queue's NULL-status filter picked it up next tick → tried direct
lookup → timed out → returned → infinite loop.
The "preserve manual match" intent was correct: don't fall through
to the name-search path because that could overwrite a manually-set
`audiodb_id` with a wrong guess. Bug was the missing `_mark_status`
call before the early return.
Fix:
* `_process_item` direct-lookup-failure branch now calls
`_mark_status(item_type, item_id, 'error')` before returning. The
existing `audiodb_id` is preserved (column not touched). Queue's
NULL-status filter no longer re-picks the row.
* `_get_next_item` retry-cutoff queue priorities (4/5/6) extended
from `audiodb_match_status = 'not_found'` to
`audiodb_match_status IN ('not_found', 'error')`. Same `retry_days`
window. Transient AudioDB outages still recover automatically;
permanently-broken IDs eventually get re-attempted once a month
rather than staying errored forever.
5 new tests in `tests/test_audiodb_worker_stuck_track.py` use a real
SQLite DB (not mocks) so the SQL queries are actually exercised:
- lookup-returns-None marks status='error' (no infinite loop)
- lookup-raises-exception marks status='error' (defensive)
- lookup-success preserves the existing match-success path
- error-status row past retry-cutoff gets picked up again
- error-status row within cutoff stays skipped (loop prevention
works)
Only triggers for entities in the inconsistent `audiodb_id` set +
`match_status` NULL state. Happy path and already-matched /
already-not-found rows unchanged. Full pytest 2698 passed.
Closes #553.
|
3 days ago |
|
|
e3a4b513fd |
Merge pull request #538 from kettui/fix/repair-worker-server-source
Preserve server source during album fill |
3 days ago |
|
|
4fb9f38798 |
Your Albums: selectable wishlist modal + Tidal album resolution
Two-part fix to the Your Albums "Download Missing" flow on Discover. Part A — UX redesign The prior `downloadMissingYourAlbums()` ran a per-album loop that fired direct-download tasks via `openDownloadMissingModalForYouTube`. Reported as silently failing — "Queuing 2/2" toast with no actual transfer activity. Even when downloads worked, bypassing the wishlist meant no retry / dedup / rate-limit / source-fallback handling. Replaced with a selectable-grid modal mirroring the Download Discography pattern from the library page. Click the download button → opens a checkbox grid showing every missing album (cover, title, artist, year, track count, source) → user picks what they actually want → click "Add to Wishlist" → each album's tracks get resolved + queued through the existing wishlist auto-download processor. NDJSON progress stream renders ✓/✗ per album. New JS helpers: - `_openYourAlbumsBatchModal(missingAlbums)` — builds the modal - `_renderYourAlbumsBatchCard(row, index)` — per-album card - `_yourAlbumsBatchSelectAll(select)` — bulk toggle - `_updateYourAlbumsBatchFooterCount()` — live count + button text - `_closeYourAlbumsBatchModal()` — overlay teardown - `_startYourAlbumsBatchAddToWishlist()` — submit handler, NDJSON progress consumer - `_yourAlbumsPickSource(album)` — picks the single best source-id per row (priority: spotify → deezer → tidal → discogs) Reuses the `.discog-*` CSS classes from the library Download Discography modal — no new CSS. Reuses the existing `/api/artist/<id>/download-discography` endpoint. The endpoint's URL artist_id param is functionally unused (per-album payload carries everything — verified by reading the endpoint body), so the modal posts with placeholder `your-albums` and gets multi-artist resolution for free without backend changes. Part B — Tidal album resolution Reported as the original bug: clicking download on Tidal-only albums did nothing because `/api/discover/album/<source>/<album_id>` had no `tidal` branch and `tidal_client` had no `get_album_tracks` method. `core/tidal_client.py`: new `get_album_tracks(album_id, limit=None)` method. Two-phase: cursor-walk `/v2/albums/<id>/relationships/items?include=items` for track refs + position metadata (`meta.trackNumber` + `meta.volumeNumber`), batch-hydrate via existing `_get_tracks_batch` for artist/album names. Returns `Track` objects with `track_number` and `disc_number` attached. Sort by (disc, track) so multi-disc compilations render in album order. `web_server.py`: new `'tidal'` source branch in `/api/discover/album/<source>/<album_id>`. Resolves album metadata via `get_album`, tracks via `get_album_tracks`, cover art via inline `?include=coverArt` lookup. Same response shape as Spotify/Deezer branches. 
`webui/static/discover.js`: - `tidal_album_id` added to `trySources` for the single-album click flow (`openYourAlbumDownload`) - Same source picker drives the new batch modal - Virtual-id generation includes `tidal_album_id` so Tidal-only albums get stable identifiers across discover-album-* / your- albums-* contexts 10 new tests in `tests/test_tidal_album_tracks.py` pin: - Single-page walk + hydration - Multi-page cursor chain - Multi-disc sort order (disc 1 → 2 in track order each) - `limit` short-circuit at page boundary - No-token short-circuit (no API call) - HTTP error returns empty - 429 raises (propagates to `rate_limited` decorator for retry) - Forward-compat type filter (skips non-track entries) - Partial-batch hydration failure containment - Empty-album short-circuit (no batch call) Full pytest: 2693 passed. |
3 days ago |
|
|
7a23d60f28 |
AcoustID scanner: file-tag fallback for legacy compilation tracks
Follow-up to the prior compilation-album scanner fix. That patch
made the scanner read `tracks.track_artist` (per-track artist
column) via COALESCE so compilation tracks would compare against
the right value. But tracks downloaded BEFORE the `track_artist`
column existed have track_artist=NULL — COALESCE falls back to
album artist (the curator) and the wrong-comparison case returns.
Fix: explicit 3-tier resolution in `_scan_file`:
1. DB `tracks.track_artist` if populated → trust it. Respects
manual edits from the enhanced library view (user who curated
the DB value but didn't re-tag the file gets their edit
respected, not overridden by stale file tag).
2. File's ARTIST tag via mutagen if present → use it. Tidal /
Spotify / Deezer all write the per-track artist into the
audio file at download time regardless of SoulSync's DB
schema, so it's ground truth even when the DB column is
stale or NULL. File is already open for fingerprinting so
mutagen tag-read is essentially free.
3. Album artist → final fallback for files without proper ARTIST
tags AND no DB track_artist. Existing pre-fix behavior.
`_load_db_tracks` SELECT now surfaces `track_artist` (raw, may be
empty/NULL via NULLIF) and `album_artist` separately in addition
to the COALESCE'd `artist` field — so `_scan_file` can tell the
difference between 'DB has a curated value' and 'DB fell back to
album artist'. Without this distinction, the file-tag fallback
would create false positives when DB is curated but file is stale.
5 new tests (11 total in the file) pin:
- File-tag-trumps-DB resolves the legacy NULL case (DB says
'Andromedik' (album curator), file says 'Eclypse', AcoustID
says 'Eclypse' → no flag)
- Tag-missing falls back to album artist (preserves existing
genuine-mismatch contract — file without tag + AcoustID
mismatch still flags)
- Mutagen exception swallowed (debug log, fall-through)
- File-tag matches DB → no behavioral change
- DB curated value trumps stale file tag (false-positive guard
— user edited DB without re-tagging file shouldn't get flagged)
Two existing test fixtures (`_make_context` callers) updated to
the new 10-column row shape.
SQL behavior verified empirically against real SQLite: NULL and
empty-string both flow through NULLIF → None in Python →
file-tag-fallback path. Modern populated values trump file tag.
|
4 days ago |
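A sketch of the 3-tier resolution; the mutagen read is shown standalone here even though the real scanner already has the file open for fingerprinting, and the function shape is illustrative:

```python
import mutagen


def resolve_expected_artist(db_track_artist, file_path, album_artist) -> str:
    # 1. A curated DB value always wins (manual edits respected, stale file tags ignored).
    if db_track_artist:
        return db_track_artist
    # 2. Fall back to the file's own ARTIST tag: downloads write the per-track
    #    artist into the file regardless of the DB schema at the time.
    try:
        audio = mutagen.File(file_path, easy=True)
        values = audio["artist"] if audio else []
        if values and values[0]:
            return values[0]
    except Exception:
        pass  # missing tag or unreadable file: swallow and keep falling through
    # 3. Last resort: the album artist (pre-fix behavior).
    return album_artist
```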
|
|
f4c433c151 |
Tidal: rewire favorite albums + artists to V2 user-collection endpoints
Discord: Discover → Your Albums (and Your Artists) was returning nothing
for Tidal users regardless of how many albums/artists they'd favorited.
Audit found `get_favorite_albums` and `get_favorite_artists` called the
deprecated `/v2/favorites?filter[type]=ALBUMS|ARTISTS` endpoint — that
endpoint returns 404 for personal favorites because it's scoped to
collections the third-party app created itself. The V1 fallback
(`/v1/users/<id>/favorites/...`) is also dead because modern OAuth
tokens carry `collection.read` instead of the legacy `r_usr` scope V1
demands (returns 403). Same root cause as the favorited-tracks fix from
#502.
Fix: rewire to the working V2 user-collection endpoints —
`/v2/userCollectionAlbums/me/relationships/items` and
`/v2/userCollectionArtists/me/relationships/items` — using the same
cursor-paginated pattern shipped for tracks.
Architecture:
* ID enumeration lifted into a generic
  `_iter_collection_resource_ids(path, expected_type, max_ids)` helper
  so tracks / albums / artists all share one walker. Three thin wrappers
  preserve the per-resource public surface (`_iter_collection_track_ids`,
  `_iter_collection_album_ids`, `_iter_collection_artist_ids`). Net
  deduped ~80 lines that would otherwise be three near-identical copies.
* Batch hydration via `/v2/{albums|artists}?filter[id]=...&include=...`
  with extended JSON:API include semantics. One request returns up to 20
  albums + their artists + cover artworks all in `included[]` (or 20
  artists + their profile artworks). Three static helpers parse the
  response:
  - `_build_included_maps(included)` → indexes the array by type so
    per-resource lookup is O(1) per relationship ref
  - `_first_artist_name(rels, artists_map)` → resolves primary artist
    from relationships block; '' on missing/unknown
  - `_first_artwork_url(rel, artworks_map)` → picks `files[0]` (Tidal
    returns artwork files largest-first, so this gets the
    highest-resolution variant — typically 1280×1280)
* Public methods (`get_favorite_albums`, `get_favorite_artists`)
  preserve the prior return shape — list of dicts matching what
  `database.upsert_liked_album` / `upsert_liked_artist` consume — so the
  discover aggregator path in `web_server.py` stays byte-identical. No
  caller changes needed.
* Deleted ~240 lines of dead code: the V2-favorites paths AND the V1
  fallback paths from the old method bodies. Both are dead against
  modern OAuth tokens.
24 new tests in `tests/test_tidal_favorite_albums_artists.py` pin:
* Cursor-walker dispatch (album/artist iters pass correct path +
  expected_type to the generic walker)
* Included-map building (groups by type, skips items missing id)
* Artist + artwork relationship resolution (full + missing rels +
  unknown id + no files cases)
* Batch hydration parse for albums (full attributes, missing
  relationships fall through to defaults, type-filter excludes non-album
  entries, `filter[id]` param is comma-joined)
* Batch hydration parse for artists (same shape coverage)
* End-to-end orchestrator behavior (walk → batch → return, empty-input
  short-circuits without API call, BATCH_SIZE chunking on 41 IDs →
  20/20/1, exception-from-iter returns [])
Endpoint paths empirically verified against live Tidal API:
`userCollectionArtists/me/relationships/items` returned 200 + 5 real
artist refs for the test account. `userCollectionAlbums/...` returned
200 + empty (account has 0 album favorites currently) but the response
shape is correct. The deprecated `/v2/favorites?filter[type]=ALBUMS`
returned 404. The V1 `/v1/users/<id>/favorites/albums` returned 403
with explicit "Token is missing required scope. Required scopes: r_usr"
message.
WHATS_NEW entry under existing '2.5.1' block. Full pytest: 2678 passed.
|
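A rough sketch of the included-map parsing described in the entry above.
The helper names mirror the commit text, but the bodies and JSON:API
field names beyond type/id/attributes/files (e.g. `href`) are
assumptions of this illustration, not the project's actual code:
```python
def build_included_maps(included):
    """Index a JSON:API included[] array by resource type for O(1) lookups."""
    maps = {}
    for item in included or []:
        item_id = item.get("id")
        if not item_id:
            continue                                   # skip entries missing an id
        maps.setdefault(item.get("type"), {})[item_id] = item
    return maps

def first_artwork_url(artwork_rel, artworks_map):
    """Resolve the first (largest) artwork file referenced by a relationship block."""
    for ref in (artwork_rel or {}).get("data") or []:
        artwork = artworks_map.get(ref.get("id"))
        files = (artwork or {}).get("attributes", {}).get("files") or []
        if files:
            return files[0].get("href", "")            # Tidal lists files largest-first
    return ""
```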
4 days ago |
|
|
6fe85f2f37 |
Server playlist sync: append mode (preserve user-added tracks)
Discord report (CJFC, 2026-04-26): syncing a Spotify playlist to the
server overwrote anything manually added to the server-side playlist.
The fix adds a per-sync mode picker next to the Sync button on the
playlist details modal — Replace (default, current delete-recreate
behavior) or Append only (preserves existing tracks, only adds new
ones). Useful when the source platform caps playlist size and the
user is manually building beyond it on the server.
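A minimal sketch of the two modes as a dispatch, to make the
Replace/Append distinction concrete. `media_client` and the exact
validation behaviour are assumptions of this illustration; the real
wiring is described under Implementation below:
```python
def sync_playlist(media_client, name, tracks, sync_mode="replace"):
    if sync_mode not in ("replace", "append"):
        sync_mode = "replace"              # unknown mode strings fall back to the default
    if sync_mode == "append":
        # Preserve whatever is already on the server-side playlist; only add new tracks.
        return media_client.append_to_playlist(name, tracks)
    # Current behaviour: delete-recreate the playlist from the source platform.
    return media_client.update_playlist(name, tracks)
```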
Implementation:
* New `append_to_playlist(name, tracks)` method on Plex / Jellyfin /
Navidrome clients. Each uses the server's NATIVE append API:
- Plex: `existing_playlist.addItems(new_tracks)`
- Jellyfin: `POST /Playlists/<id>/Items?Ids=...&UserId=...`
- Navidrome: Subsonic `updatePlaylist?songIdToAdd=...`
Falls back to `create_playlist` when the playlist doesn't exist
yet (first sync). No delete-recreate, no backup playlist created
(preserves playlist creation date + metadata + non-soulsync-managed
tracks).
* Dedup-by-server-native-id (ratingKey for Plex, GUID for Jellyfin,
song-id for Navidrome) — never re-adds a track already on the
playlist. Server-native identity, not fuzzy title+artist match,
so it can't false-collide.
* `sync_service.sync_playlist` accepts `sync_mode='replace'|'append'`
kwarg. Single if/else branch dispatches to `append_to_playlist` or
`update_playlist`. Threaded through `core/discovery/sync.run_sync_task`
and the `/api/sync/start` HTTP handler. Validation on the API rejects
unknown mode strings (defaults to 'replace').
* Frontend: per-playlist `<select id="sync-mode-${id}">` rendered next
to the Sync button in both modal renderers (sync-spotify.js for
Spotify playlists, sync-services.js for Deezer ARL playlists).
  `startPlaylistSync` reads the select at click time; a missing select
  (other callers like discover.js) defaults to 'replace', so backward
  compat is preserved without per-call-site updates.
* SoulSync standalone has no playlist methods at all and the modal
hides the Sync button entirely on it via `_isSoulsyncStandalone` —
dispatch never reaches that path, no defensive fallback needed.
15 new tests pin per-server append behavior:
- missing playlist → create_playlist delegation
- dedup filtering (existing IDs skipped, only new tracks added)
- empty new-track set short-circuits without API call
- failure paths return False without raising
- contract listing (KNOWN_PER_SERVER_METHODS includes
'append_to_playlist'; Plex / Jellyfin / Navidrome all implement)
Plus tests/discovery/test_discovery_sync.py fake `sync_playlist`
fixture got `sync_mode='replace'` default to match the new signature
(was breaking after the kwarg add; now passing).
WHATS_NEW entry under new '2.6.0' block (hidden by
`_getLatestWhatsNewVersion` until next release bump).
Closes CJFC discord request.
|
4 days ago |
|
|
f28f9808db |
Tidal: surface Favorite Tracks as virtual playlist (issue #502)
Adds the user's Tidal favorited tracks ("My Collection" in the Tidal
app) as a virtual playlist alongside their real playlists, mirroring
how Spotify's "Liked Songs" is treated.
Reporter (yug1900) located the working endpoint after the prior
`/v2/favorites?filter[type]=TRACKS` attempt returned empty data —
that endpoint is scoped to collections the third-party app created
itself, not personal favorites. Real endpoint:
GET /v2/userCollectionTracks/me/relationships/items
?countryCode=US&locale=en-US&include=items
Cursor-paginated (20 per page, follow `links.next` with
`page[cursor]=...` until exhausted). Response only carries
track-level attributes — artist + album NAMES come back as
relationship-link stubs, not embedded data.
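A hedged sketch of the cursor walk, with `api_get` standing in for the
client's authenticated GET helper and the assumption that item refs
carry a `type` of `tracks`; the real walker is described under
Implementation below:
```python
def iter_collection_track_ids(api_get, max_ids=None):
    path = "/v2/userCollectionTracks/me/relationships/items"
    params = {"countryCode": "US", "locale": "en-US", "include": "items"}
    ids = []
    while path:
        body = api_get(path, params=params)
        for ref in body.get("data", []):
            if ref.get("type") != "tracks":
                continue                                # forward-compat: skip unknown types
            ids.append(ref.get("id"))
            if max_ids and len(ids) >= max_ids:
                return ids
        path = (body.get("links") or {}).get("next")    # follow page[cursor] until exhausted
        params = None                                   # the next link already carries its query
    return ids
```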
Implementation:
* Two-phase fetch — `_iter_collection_track_ids` walks the cursor
chain to enumerate every track id (cheap, IDs only), then
`get_collection_tracks` batch-hydrates 20 IDs at a time through
the existing `_get_tracks_batch` helper which already knows how
to `include=artists,albums`. No duplication of the JSON:API
artist/album parse, no new dataclass shape.
* Virtual playlist `tidal-favorites` appended to the end of
`/api/tidal/playlists`. ID intentionally has no colon —
sync-services.js renderer interpolates IDs into CSS selectors
via template literals (`#tidal-card-${p.id} .foo`) and a `:`
would parse as a CSS pseudo-class operator.
* `tidal_client.get_playlist("tidal-favorites")` recognizes the
virtual id and dispatches to the collection path internally, so
every per-id consumer gets it for free: detail endpoint, mirror
auto-refresh automation, "build Spotify discovery from Tidal
playlist" flow.
OAuth scope expansion:
* Added `collection.read` to both OAuth flows (the
`core/tidal_client.py::authenticate` standalone path AND the
`web_server.py::auth_tidal` web flow — they were independent
scope strings that both needed updating).
* Added `prompt=consent` to both flows — without it Tidal silently
returns a token carrying only the ORIGINAL scope set even after
re-authentication, because Tidal treats the existing
authorization as still valid.
* New `disconnect()` method + `POST /api/tidal/disconnect`
endpoint + Disconnect button next to Authenticate in Settings →
Connections → Tidal — required for users whose existing token
predates the scope expansion (forces a clean grant).
Reconnect-needed UI hint:
* `_collection_needs_reconnect` flag set on 401/403 from the
collection endpoint, cleared on next successful walk, NOT set
on 5xx (transient server errors must not falsely tell the user
to reconnect).
* Listing endpoint reads the flag and surfaces a placeholder card
titled "Favorite Tracks (reconnect Tidal to enable)" with a
description pointing at Settings, so the user has something
visible to act on instead of a silently missing row.
Diagnostic logging — collection request URL + response status +
first 300 bytes of body now logged at info level so future "why
is my collection empty" reports can be diagnosed from app.log
without needing live reproduction.
22 new tests pin: cursor walk (full chain, max-ids cap mid-page +
at page boundary), auth gates (no token / 401 / 403 all bail
clean), reconnect-flag lifecycle (set on 401/403, cleared on next
successful walk, NOT set on 5xx), forward-compat type filter
(non-track entries skipped), count helper, batch hydration
delegation + chunking at the 20-per-batch cap, partial-batch
failure containment, virtual-id dispatch (real playlist ids still
flow through the normal path).
Closes #502.
|
4 days ago |
|
|
b5b6673216 |
Reorganize: hint at Unknown Artist Fixer for placeholder-metadata rows
Phase B of foxxify discord report. Pre-#524 manual-import bug left
some albums in the library with `artist=Unknown Artist` and `album.title
= <numeric album_id>`. Reorganize couldn't place them (no usable
metadata source ID) and emitted a generic "run enrichment first" hint
that doesn't apply — enrichment can't fix these rows. The right tool
is the existing `Fix Unknown Artists` repair job (reads file tags,
re-resolves metadata, re-tags + moves files).
Discoverability gap, not a logic gap. Reorganize now detects the bad-
metadata shape (Unknown Artist OR album.title that's a 6+ digit
numeric id) and emits a clear "run the Fix Unknown Artists repair
job" hint at both reason-emit sites (planner + executor). No
duplication of fixer logic.
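The detection itself is simple enough to sketch; the field names follow
the wording above and the exact predicate in the codebase may differ:
```python
def looks_like_placeholder_metadata(artist_name: str, album_title: str) -> bool:
    """Pre-#524 manual imports left 'Unknown Artist' rows whose album title is a numeric album_id."""
    if artist_name == "Unknown Artist":
        return True
    return album_title.isdigit() and len(album_title) >= 6
```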
WHATS_NEW entry covers both Phase A (orphan-format sibling handling,
already committed in
|
4 days ago |
|
|
d944a166f8 |
Reorganize: move orphan-format siblings alongside the canonical
Discord report (Foxxify): users with the lossy-copy feature enabled have
`track.flac` AND `track.opus` side-by-side in their library. Reorganize
is DB-driven and only knows about ONE file per track (the lossy copy).
The other format used to get left behind in the old location while the
canonical moved to its new destination. Empty-folder cleanup never fired
because the source dir still had audio.
# What was happening
1. User downloads album → SoulSync transcodes `.flac` → `.opus`, embeds
   `.lrc` lyrics
2. DB row points at `.opus` (the lossy library copy)
3. User runs Library Reorganize
4. Reorganize moves `.opus` to new template path →
   `Artist/Album/01 Track.opus`
5. `.flac` orphan stays at old location, `.lrc` follows `.opus`
6. Source dir still has the `.flac` → cleanup skips → empty folders
   pile up
# Fix
`_finalize_track` now finds sibling-stem audio files at the source
BEFORE removing the canonical and moves them to the same destination
dir, preserving both formats with the canonical's renamed stem. Two new
helpers in `core/library_reorganize.py`:
- `_find_sibling_audio_files(audio_path) -> list[str]` — returns paths
  to other audio files at the same directory that share the canonical's
  filename stem. Excludes the canonical itself, non-audio extensions
  (sidecars handled separately by `_delete_track_sidecars`), and
  different-stem tracks (different songs in the same dir).
- `_move_sibling_to_destination(sibling_src, canonical_dst) -> str` —
  moves a sibling-format file to the canonical's destination dir with
  the canonical's renamed stem + the sibling's original extension.
  Defensive — OS errors logged at warning, return None, doesn't raise
  (caller treats as best-effort).
After the fix:
1. `.opus` → moved to new dir
2. `.flac` sibling detected → moved to same new dir with same stem
3. Source `.opus` removed, `.lrc` sidecar deleted from source
4. Source dir empty → cleanup proceeds normally
5. Both formats end up paired at the new location
# Tests added (11)
`tests/test_reorganize_orphan_format_handling.py`:
- Sibling detection: finds `.flac` when `.opus` is canonical (and
  symmetric direction), excludes canonical itself, excludes
  different-stem tracks, excludes non-audio (`.lrc`/`.nfo`), finds
  multiple siblings (3+ formats), returns empty when source dir missing
- Sibling move: renames to canonical stem + preserves sibling extension,
  creates destination dir if missing, no-op when source already at
  destination, returns None on OS failure (caller treats as best-effort)
# Verification
- 11/11 new tests pass
- 97/97 reorganize-related tests pass total (no regression in existing
  helpers)
- Ruff clean
# Follow-up in same PR
Next commit: cleanup repair job for legacy "Unknown Artist / album_id"
rows from the pre-#524 manual-import bug. Reorganize correctly leaves
those alone (they're DB-broken, not file-broken), but a separate
maintenance job to find + re-enrich them is needed.
|
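A minimal sketch of the sibling-stem scan described in the entry above;
the audio-extension set is an assumption of this illustration, not the
project's actual list:
```python
from pathlib import Path

AUDIO_EXTS = {".flac", ".opus", ".mp3", ".m4a", ".ogg"}   # assumed set for illustration

def find_sibling_audio_files(audio_path: str) -> list[str]:
    """Other audio files in the same dir that share the canonical file's stem."""
    canonical = Path(audio_path)
    if not canonical.parent.is_dir():
        return []                                          # source dir missing -> nothing to move
    siblings = []
    for candidate in canonical.parent.iterdir():
        if candidate == canonical or not candidate.is_file():
            continue
        if candidate.stem != canonical.stem:
            continue                                       # a different song in the same dir
        if candidate.suffix.lower() not in AUDIO_EXTS:
            continue                                       # sidecars are handled elsewhere
        siblings.append(str(candidate))
    return siblings
```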
4 days ago |
|
|
812db1fbbf |
AcoustID scanner: prefer track_artist for compilation albums
Discord report (Skowl): downloaded a compilation album ("High Tea
Music: Vol 1") where every track has a different artist (Eclypse,
Andromedik, T & Sugah, Gourski, etc.) and the AcoustID scanner
flagged every single track as Wrong Song. The file tags had the
correct per-track artist (e.g. "Eclypse" for "City Lights"), but
the scanner compared against the album-level artist ("Andromedik",
the curator). Raw similarity 12% → Wrong Song flag.
# Why the prior multi-value fix didn't help
Foxxify's case (just-merged PR): AcoustID returned multi-value
credit "Okayracer, aldrch & poptropicaslutz!" — primary IS in the
credit. Splitting found it.
Skowl's case: both sides single-value but DIFFERENT artists.
Splitter has nothing to find — Eclypse simply isn't in "Andromedik".
Different bug.
# Cause
Scanner SQL at `core/repair_jobs/acoustid_scanner.py:281` joined
the `artists` table via `tracks.artist_id` which points at the
ALBUM artist (the curator/label-name applied to every row in a
compilation). The `tracks.track_artist` column already holds the
correct per-track artist for compilations — populated by every
server-scan path (Plex `originalTitle`, Jellyfin `ArtistItems`,
Navidrome per-track `artist`) AND the auto-import / direct-download
post-process flow (`record_soulsync_library_entry` writes it when
different from album artist). Scanner just wasn't reading it.
# Fix
```sql
SELECT t.id, t.title,
COALESCE(NULLIF(t.track_artist, ''), ar.name) AS artist,
...
```
Prefers per-track artist when populated, falls back to album artist
for legacy rows / single-artist albums where `track_artist` is NULL.
`NULLIF(t.track_artist, '')` handles the empty-string-instead-of-null
case some legacy rows might have.
# Composes with Foxxify's multi-value fix
For the rare compilation track where AcoustID ALSO returns a
multi-value credit (e.g. compilation track has multiple credited
performers), both paths work together — `track_artist` gives the
correct expected primary, then the helper splits the credit and
finds it.
# Tests added (2)
- `test_load_db_tracks_prefers_track_artist_for_compilation` —
reporter's exact case: track with `track_artist='Eclypse'` AND
`artist_id` pointing at album artist 'Andromedik' resolves to
'Eclypse'. Second track with NULL `track_artist` falls back to
album artist 'Andromedik' (single-artist + legacy compat).
- `test_load_db_tracks_falls_back_when_track_artist_empty_string`
— empty string in `track_artist` (some legacy rows) → NULLIF
returns NULL → COALESCE falls back to album artist.
Both use a real SQLite DB so the COALESCE/NULLIF logic + JOIN
runs against actual schema (SimpleNamespace fakes can't simulate
JOINs).
# Verification
- 6/6 scanner tests pass (2 new + 4 existing)
- 2586 full suite passes (+2 from prior commit)
- Ruff clean
|
4 days ago |
|
|
df304eb016 |
AcoustID scanner: handle multi-value artist credits
Discord report (Foxxify): the AcoustID scanner repair job flagged
multi-artist tracks as Wrong Song because AcoustID returns the
FULL credit ("Okayracer, aldrch & poptropicaslutz!") while the
library DB carries only the primary artist ("Okayracer"). Raw
SequenceMatcher similarity scored ~43% — well below the 60%
threshold — so the scanner created a finding even though the
audio was correct. User couldn't fix without lowering the global
artist threshold to ~30% (which would let real mismatches through).
# Fix
Extended the shared `core/matching/artist_aliases.py::artist_names_match`
helper (originally lifted for #441) with credit-token splitting.
When the actual artist string contains common separators —
- punctuation: `,` `&` `;` `/` `+`
- keywords (whitespace-bounded): `feat.` `ft.` `featuring` `with`
`vs.` `x`
— the helper splits into individual contributors and checks each
against the expected artist. Primary-in-credit cases now resolve
at 100% instead of 43%.
Two pattern groups because punctuation separators don't need
surrounding whitespace, but keyword separators MUST be
whitespace-bounded — otherwise we'd split artists with `x` /
`with` etc. in their names ("JAY-X" → "JAY-" / "" issue).
Composes with the existing alias path: cross-script multi-artist
credits ("Hiroyuki Sawano" expected, "澤野弘之, FeaturedJp"
actual) work via alias-token-against-credit-token compare.
# Wire-in
Scanner at `core/repair_jobs/acoustid_scanner.py:202` replaces
the raw `SequenceMatcher` call with `artist_names_match`. Pass
RAW artist strings (not pre-normalised by `_normalize`) so the
splitter can recognise separators — `_normalize` strips ALL
punctuation, which destroyed the very tokens the splitter needs.
The AcoustID post-download verifier (`core/acoustid_verification.py`)
already routes through `_alias_aware_artist_sim` which calls the
same helper — gets the multi-value benefit automatically without
a separate wire-in.
# New `split_artist_credit` exported helper
Pure-function helper for callers who want token-level access to
the credit list (debugging, UI, future per-token enrichment). Same
splitter logic, exposed as a top-level function.
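A hedged sketch of the splitting rule: punctuation separators match
anywhere, keyword separators only when whitespace-bounded. The regexes
are illustrative, not the module's actual patterns:
```python
import re

_PUNCT_SEPARATORS = re.compile(r"\s*[,&;/+]\s*")
_KEYWORD_SEPARATORS = re.compile(r"\s+(?:feat\.|ft\.|featuring|with|vs\.|x)\s+", re.IGNORECASE)

def split_artist_credit(credit: str) -> list[str]:
    """Split a combined credit into contributor tokens; names like 'JAY-X' stay intact."""
    if not credit:
        return []
    parts = [credit]
    for pattern in (_PUNCT_SEPARATORS, _KEYWORD_SEPARATORS):
        parts = [piece for chunk in parts for piece in pattern.split(chunk)]
    return [p.strip() for p in parts if p.strip()]

# split_artist_credit("Okayracer, aldrch & poptropicaslutz!")
#   -> ["Okayracer", "aldrch", "poptropicaslutz!"]
```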
# Tests added (14)
`tests/matching/test_artist_aliases.py` (+11):
- `TestSplitArtistCredit` — parametrised across 12 credit-string
formats (comma, ampersand, semicolon, slash, plus, feat./ft./
featuring, with, vs., x, single-token, empty), drops empty
tokens, strips per-token whitespace
- `TestMultiValueCreditMatching` — reporter's exact case
(Okayracer in 3-artist credit → 100%), primary in middle/end of
credit, genuine-mismatch still fails, single-token actual falls
through to direct compare, multi-value composes with aliases,
threshold still respected
`tests/test_acoustid_scanner.py` (+3):
- Reporter's case end-to-end through `_scan_file` — fingerprint
99% / title 100% / multi-artist credit → no finding created
- Genuine artist mismatch still creates finding (no false
suppression of real mismatches)
- `JobResultStub` minimal scaffold for the integration tests
# Verification
- 14 new tests pass (49 helper + 5 scanner total in their files)
- 110 matching + scanner tests pass total
- 2584 full suite passes (+25 from baseline 2559)
- Ruff clean
- Reporter's exact case (Okayracer in `Okayracer, aldrch &
poptropicaslutz!`) now scores 100% match → no Wrong Song flag
|
4 days ago |
|
|
8a4c0dc92a |
Deezer cover-art download: fallback to original URL on CDN refusal
Defensive followup. If Deezer CDN ever refuses the upgraded 1900×1900
URL for a specific album (rare — empirically tested 4 albums and none
hit it), pre-fix would have succeeded with the 1000×1000 URL and
post-fix would have failed entirely. Both download sites now retry with
the original URL when the upgraded URL fails:
- `core/metadata/artwork.py::download_cover_art` — auto post-process
  flow. Resolves the original URL from album_info / context the same
  way the existing path does.
- `core/tag_writer.py::download_cover_art` — captures the original URL
  before upgrade so the retry has it without a second context lookup.
Strictly non-regressive: worst plausible post-fix case is now identical
to pre-fix (cover at 1000×1000 succeeds). Fallback only fires on the
rare CDN-refusal edge.
Tests added (2):
- `test_tag_writer_retries_with_original_on_failure` — upgraded URL
  raises, original succeeds, both attempts logged in call order
- `test_tag_writer_no_fallback_for_non_dzcdn_url` — non-Deezer URLs go
  through unchanged, no fallback path triggered (single attempt)
Verification:
- 18/18 helper + integration tests pass
- 2561 full suite passes
- Ruff clean
|
4 days ago |
|
|
80cf16339c |
Deezer cover art: upgrade CDN URL to 1900×1900 (was embedding 1000×1000)
Discord report (Tim): downloaded cover art via Deezer metadata
source came out visibly blurry in Navidrome / on phones — large
displays exposed the limited resolution.
# Cause
Deezer's API returns `cover_xl` URLs at 1000×1000. The underlying
CDN actually serves up to 1900×1900 by rewriting the size segment
in the URL path (same trick the iTunes mzstatic + Spotify scdn
upgrades already use). SoulSync wasn't doing the rewrite — every
Deezer-sourced cover got embedded at 1000×1000 regardless of how
much higher resolution the CDN had available.
# Verified empirically
```
$ for size in 1000 1400 1800 1900 2000; do curl -I "...{size}x{size}-..."; done
1000: 200 OK 106 KB
1400: 200 OK 198 KB
1800: 200 OK 331 KB
1900: 200 OK 371 KB
2000: 403 Forbidden
```
1900 is the safe ceiling. Above that the CDN returns 403. CDN
serves source-native bytes when source < target (smaller-source
albums get same bytes whether we ask for 1000 or 1900), so asking
for 1900 universally is safe.
# Fix
New `_upgrade_deezer_cover_url(url, target_size=1900)` helper in
`core/deezer_client.py`. Pure function, mirrors the
`_upgrade_spotify_image_url` pattern that already lives in
`core/spotify_client.py`. Defensive on every input shape:
- Empty / None → returned as-is
- Non-Deezer URL (no `dzcdn`) → returned as-is
- No size segment in URL → returned as-is
- Already at/above target → returned as-is (idempotent, never
downgrades)
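A sketch of the size-segment rewrite, assuming the size appears in the
CDN path as a `WxH` segment; the real helper is
`_upgrade_deezer_cover_url(url, target_size=1900)` and its body may
differ:
```python
import re

_SIZE_SEGMENT = re.compile(r"/(\d{2,4})x(\d{2,4})(?=[-.])")   # assumed path shape

def upgrade_deezer_cover_url(url: str, target: int = 1900) -> str:
    if not url or "dzcdn" not in url:
        return url                         # non-Deezer URLs pass through untouched
    match = _SIZE_SEGMENT.search(url)
    if not match:
        return url                         # no size segment -> leave as-is
    if int(match.group(1)) >= target:
        return url                         # already at/above target; never downgrade
    return url[:match.start()] + f"/{target}x{target}" + url[match.end():]
```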
Applied at both cover-download sites:
- `core/metadata/artwork.py::download_cover_art` — auto post-process
flow. Mirrors the existing iTunes mzstatic upgrade right above it.
- `core/tag_writer.py::download_cover_art` — enhanced library view's
"Write Tags to File" feature.
# Scope discipline
- Helper applied at the DOWNLOAD boundary, not the source extraction
point in `deezer_client.py`. Means cached entries in the metadata
cache + DB row `image_url` columns keep the original 1000×1000 URL
Deezer's API returned. Future CDN behavior changes only affect the
download path, not stored data.
- Pre-existing `prefer_caa_art` toggle (Settings → Library →
Post-Processing) untouched — orthogonal workaround for users who
want even higher quality (MusicBrainz Cover Art Archive, often
3000×3000+).
- iTunes / Spotify upgrade paths untouched — they already worked.
# Tests added (16)
`tests/metadata/test_deezer_cover_url_upgrade.py`:
- Standard upgrade: default target 1900 on cover URL, alternate
dzcdn host (`e-cdns-images.dzcdn.net` vs `cdn-images.dzcdn.net`),
artist picture URLs (same path pattern), 500×500 source upgrades
too
- Custom target size: smaller target = no-op (never downgrade),
larger target works
- Idempotent: already at/above target returned unchanged
- Defensive on non-Deezer URLs: parametrised across 5 hosts
(Spotify scdn, iTunes mzstatic, MB CAA, Last.fm, random) — all
returned untouched
- Defensive on malformed Deezer URL (no size segment) → returned
as-is
- Empty / None handling
# Verification
- 16/16 helper tests pass
- 560/560 metadata + imports tests pass (no regression)
- 2559 full suite passes
- Ruff clean
|
4 days ago |
|
|
bc34d39ce9 |
Tighten alias-lookup trust + add ambiguity gate + diagnostic log
Cin pre-review pass on the false-positive risk. Three tightenings:
# 1. Bumped MB-search trust threshold from 0.6 → 0.85
`MusicBrainzService.lookup_artist_aliases` previously trusted any
MB search match scoring ≥ 0.6 combined (name-similarity + MB
relevance). For distinctive cross-script artists the user-reported
case targets (Hiroyuki Sawano, Сергей Лазарев, etc.) real matches
score ~1.0 — well above 0.85. The 0.6 floor was loose enough to
let in moderate matches for ambiguous names, risking aliases for
the wrong artist getting cached + applied.
Bumped to 0.85. Tighter without rejecting any of the legit
cross-script cases the PR is for.
# 2. Ambiguity gate — skip when results within 0.1 of best
When MB search returns multiple results all scoring high (within
0.1 of the best), the artist name is ambiguous — common name with
multiple distinct artists ("John Smith" returning 10 different
John Smiths). Pulling aliases for any one of them risks the wrong
artist's data bridging incorrectly to a file's tag.
Added explicit ambiguity detection: when 2+ results within 0.1,
skip alias lookup entirely + cache empty. Matches Cin's
"explicit > implicit" — the prior code just picked the highest
score blindly.
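A small sketch of the two gates together (trust threshold + ambiguity
window); `scored` is assumed to be a list of (candidate, combined_score)
pairs from the MB artist search:
```python
TRUST_THRESHOLD = 0.85
AMBIGUITY_WINDOW = 0.1

def pick_unambiguous_match(scored):
    """Return the single trustworthy MB search result, or None to skip alias lookup."""
    if not scored:
        return None
    ranked = sorted(scored, key=lambda pair: pair[1], reverse=True)
    best, best_score = ranked[0]
    if best_score < TRUST_THRESHOLD:
        return None                         # not confident enough to trust this artist's aliases
    if any(best_score - score <= AMBIGUITY_WINDOW for _, score in ranked[1:]):
        return None                         # ambiguous name -> cache empty, skip alias lookup
    return best
```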
# 3. Diagnostic log when alias rescues a comparison
When the alias path triggers a PASS that direct similarity would
have FAILed, emit an INFO log: `Artist alias rescued comparison:
expected='X' vs actual='Y' (direct sim=0.00, alias 'Z' →
score=1.00)`.
Lets future bug reports trace which alias triggered which decision.
Doesn't change behavior — visibility only. Logs ONLY the rescue
case, not happy-path direct matches (no log spam).
# Tests added (5)
`test_artist_alias_service.py` (+3):
- `test_moderate_confidence_match_now_skipped_strict_threshold`
- `test_ambiguous_results_skipped`
- `test_unambiguous_high_confidence_match_succeeds`
`test_acoustid_verification_aliases.py` (+3):
- `test_alias_rescue_emits_info_log` — direct-fail + alias-pass
emits INFO log
- `test_no_log_when_direct_match_succeeds` — happy path quiet
- `test_no_log_when_alias_doesnt_help` — failed path also quiet
# Test infrastructure note
Logging tests use a directly-attached `ListHandler` on
`soulsync.acoustid.verification` (the actual logger name —
dot-separated by `get_logger`), NOT pytest's caplog. Same pattern
as the prior watchdog-test fix — caplog is intermittently flaky
in full-suite runs for soulsync namespace loggers. An owned
handler sidesteps both issues.
# Verification
- 85/85 matching tests pass (+5 from prior commit)
- 2543 full suite passes (+6 from prior, +85 PR-total)
- Ruff clean
- Reporter's Japanese + Russian regression tests still pass —
legit cross-script case (sim ≈ 1.0) clears the new 0.85
threshold easily
|
4 days ago |
|
|
11397307b2 |
Alias resolution polish: lazy-fire on direct-match failure + worker backfill
Two perf gaps that would have failed Cin's review:
# Gap #1: alias lookup fired unconditionally
Before the fix in this commit, `_resolve_expected_artist_aliases` ran
at the top of every `verify_audio_file` call regardless of whether the
direct artist match would have passed. For users whose library is
mostly same-script (95% of cases), every successful verification was
paying for a wasted DB query (and possibly a wasted MB API call for
un-enriched artists).
Restructured the helper to accept a callable provider instead of a
pre-resolved list. Provider invoked LAZILY only when direct similarity
falls below `ARTIST_MATCH_THRESHOLD`. Verifier passes a memoising thunk
that resolves once across the 3 comparison sites within one
verification.
`_alias_aware_artist_sim` now accepts `aliases` as either:
- iterable of strings (used eagerly — backward compat with tests that
  already know the aliases)
- callable returning the iterable (resolved on first need within a
  verification)
Happy path (direct match passes): zero DB queries, zero MB calls.
Cross-script case: one resolution shared across 3 sites — same as the
prior contract.
# Gap #2: existing-MBID artists never got alias backfill
Worker's `_process_item` artist branch had an `existing_id`
short-circuit (line 296) that updated MBID status but skipped alias
fetch. Result: every user with an already-enriched library had MBIDs
but NULL aliases on day-one of this PR. Live MB lookup at verify-time
covered them, but at the cost of N live calls for N artists across the
library.
Added one-time backfill: when existing-MBID is found AND
`artists.aliases` for that row is empty, fetch + persist aliases.
Subsequent re-scan cycles short-circuit on the populated column — no
repeated MB calls. New helper `_artist_aliases_empty(artist_id)` does
the cheap NULL check via direct SQL. Best-effort: defensively returns
True on errors so backfill happens (a redundant MB call is cheaper than
missing the backfill entirely).
# Tests added (9)
`test_acoustid_verification_aliases.py` (+6):
- `TestLazyAliasResolution` (3): no lookup when direct match passes,
  lookup fires only when direct fails, lookup memoised across the 3
  sites within one verification.
- `TestAliasProviderCallable` (3): iterable passed directly, callable
  resolves lazily, callable returning empty falls back to direct sim.
`test_artist_alias_service.py` (+3):
- `test_existing_mbid_path_backfills_aliases_when_column_empty`
- `test_existing_mbid_path_skips_backfill_when_aliases_already_set`
- `test_existing_mbid_backfill_failure_does_not_break_match`
# Verification
- 79/79 matching tests pass (+9 from prior commit)
- 2537 full suite passes (+9, +79 PR-total)
- Ruff clean
- Backward compat: every prior-commit test still passes (the
  iterable-shape API still works alongside the new callable shape)
|
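A hedged sketch of the callable-provider shape described in the entry
above; the function names here are illustrative, not the verifier's
actual API:
```python
def make_alias_provider(lookup, artist_name):
    """Zero-arg thunk that resolves aliases once and memoises the result."""
    cache = {}

    def provider():
        if "aliases" not in cache:
            cache["aliases"] = lookup(artist_name)   # single DB/MB resolution per verification
        return cache["aliases"]

    return provider

def resolve_aliases(aliases_or_provider):
    """Accept either an iterable of alias strings or a lazy zero-arg callable."""
    if callable(aliases_or_provider):
        return aliases_or_provider()
    return list(aliases_or_provider or [])
```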
4 days ago |
|
|
7066233c37 |
Wire alias-aware artist match into AcoustID verifier — fixes #442
This is the user-visible commit. The reporter's exact two cases
(Japanese kanji, Russian Cyrillic) now pass verification instead of
being quarantined.
# What changed
Verifier's three artist-similarity sites now route through the shared
`core.matching.artist_aliases.artist_names_match` helper instead of raw
`_similarity`:
- `_find_best_title_artist_match` (per-recording scoring at the
  best-match stage)
- Secondary scan when title matches but best-match's artist doesn't
  (line ~355 pre-fix)
- Final fallback scan over all recordings (line ~403 pre-fix)
Aliases for the expected artist are resolved ONCE at the top of
`verify_audio_file` via `_resolve_expected_artist_aliases`, which calls
the new `MusicBrainzService.lookup_artist_aliases` chain (library DB →
cache → live MB). Single resolution per verification regardless of how
many AcoustID recordings come back — pinned by test.
New helper `_alias_aware_artist_sim(expected, actual, aliases)` wraps
the pure helper with the verifier's normaliser (`_similarity`) and
threshold (`ARTIST_MATCH_THRESHOLD`). Returns a single float so
existing threshold-comparison code paths keep their shape — minimal
diff.
# Reporter's cases — verified
Case 1 (issue #442 verbatim):
  File: YAMANAIAME by 澤野弘之
  Expected: YAMANAIAME by Hiroyuki Sawano
  Pre-fix: Quarantined (artist=0%)
  Post-fix: PASS (alias '澤野弘之' resolved from MB)
Case 2 (issue #442 verbatim):
  File: On the Other Side by Sergey Lazarev
  Expected: On the other side by Сергей Лазарев
  Pre-fix: Quarantined (artist=7%)
  Post-fix: PASS (alias 'Sergey Lazarev' resolved from MB)
Both reproduced as regression tests with stubbed MB service.
# Backward compat
Three test cases pin that no-aliases / failure paths preserve pre-fix
behaviour exactly:
- Clear artist mismatch (different artist, same script) still FAILs —
  aliases bridge synonyms, not unrelated artists.
- Exact title + artist match still PASSes regardless of aliases.
- MB service raise → verifier completes with direct similarity (treats
  failure as "no aliases available" — same as pre-fix).
Also covers manual import: the import-modal "Search for Match" flow
goes through the same verifier, so the reporter's complaint that
"manual import simply throws them back in quarantine again" is fixed by
the same change.
# Tests added (11)
`tests/matching/test_acoustid_verification_aliases.py`:
- `_alias_aware_artist_sim`: alias bridges score ↑, no-aliases falls
  back, aliases don't mask genuine mismatches
- `_find_best_title_artist_match` accepts + uses aliases
- Reporter's case 1 (Japanese) end-to-end
- Reporter's case 2 (Russian) end-to-end
- Backward compat: no-aliases mismatch still fails, exact match still
  passes, MB-service-raise doesn't break verification
- Performance: alias lookup fires ONCE per verification regardless of
  recording count
# Verification
- 11 new verifier tests pass
- 31 prior service tests pass
- 28 prior helper tests pass
- 294 matching + imports tests pass total (no regression)
- Ruff clean
|
4 days ago |
|
|
15244f24cf |
Live MB lookup for un-enriched artists with cache
Previous commit only populated `artists.aliases` for artists the MB
worker had enriched. But the AcoustID verifier (next commit) needs
aliases for ANY expected artist — including:
- Artists not yet in the user's library (first download)
- Artists in the library where MB enrichment hasn't run yet
- Artists where MB enrichment ran but found no MBID (NULL aliases)
This commit adds a multi-tier resolution helper that fills those gaps
without thrashing the MB API.
# Multi-tier resolution
`lookup_artist_aliases(artist_name) -> list[str]`:
1. **Library DB** (fast path): existing `get_artist_aliases` lookup by
   name. No network. Most common path once the worker has enriched
   everything.
2. **Cache** (existing `musicbrainz_cache` table,
   entity_type=`artist_aliases`): a prior live lookup for this name.
   Empty cache hit is respected (don't re-query when MB previously had
   nothing).
3. **Live MB**: search artist by name → pick highest-confidence match
   (combined name-similarity + MB relevance) → fetch aliases for that
   MBID → cache the result.
Always returns a list (possibly empty), never raises. Empty result on
any tier means "no alternate spellings found, fall back to direct
match" — identical to the pre-fix behaviour.
# Threshold gate
Live lookup only trusts the MB search result when combined similarity
score >= 0.6. Below that, we'd be guessing at the wrong artist —
searching `John Smith` returns multiple John Smiths and pulling aliases
for one of them could mismatch. Cache the empty result so we don't keep
re-searching the same low-confidence name.
# Performance contract
Critical for the verifier path: 100 quarantine candidates with the same
expected artist must NOT trigger 100 MB API calls. Cache hit on second
+ subsequent calls per unique artist name. Verified by test pinning the
call counts.
# Tests added (8)
- Tier 1 library DB hit — no MB API call fired
- Tier 3 live MB lookup → search → fetch → returns aliases
- Tier 2 cache hit on second call — no re-query
- Empty input → empty return + no API call
- Network failure on search → empty + cached so we don't retry
- No search results → empty + cached
- Low-confidence match (sim < 0.6) skipped — defends against picking
  the wrong artist
- Library row exists but aliases NULL → falls through to live lookup
  (defends against the half-enriched state)
# Verification
- 31/31 service tests pass (8 new + 23 prior)
- Ruff clean
|
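An illustrative walk through the three tiers described in the entry
above; `db`, `cache`, and `mb` stand in for the service's collaborators
and the method names on them are assumptions:
```python
def lookup_artist_aliases(name, db, cache, mb):
    if not name:
        return []
    aliases = db.get_artist_aliases(name)            # tier 1: library DB, no network
    if aliases:
        return aliases
    cached = cache.get("artist_aliases", name)       # tier 2: prior live lookup
    if cached is not None:                           # an empty cache hit is respected
        return cached
    try:
        match = mb.search_best_artist(name)          # tier 3: live MB search + threshold gate
        aliases = mb.fetch_artist_aliases(match.mbid) if match else []
    except Exception:
        aliases = []                                 # never raise; behave like "no aliases"
    cache.put("artist_aliases", name, aliases)       # cache even empty results
    return aliases
```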
4 days ago |
|
|
48d848bb74 |
MB worker populates artists.aliases on enrichment
Issue #442 — MusicBrainz exposes alternate-spelling aliases (Japanese
kanji `澤野弘之` for `Hiroyuki Sawano`, Cyrillic `Сергей Лазарев` for
`Sergey Lazarev`, etc.) on every artist record. SoulSync's MB enrichment
worker had access to this data via `get_artist(mbid, includes=['aliases'])`
but wasn't reading or persisting it.
This commit wires the alias fetch into the worker's existing
artist-match path, persists to the new `artists.aliases` column added in
the prior commit, and adds a verifier-friendly read-by-name lookup so
the AcoustID verifier (next commit) can resolve aliases without an MB
round-trip when the artist is in the library.
# New service methods
- `fetch_artist_aliases(mbid) -> list[str]` — calls
  `mb_client.get_artist(mbid, includes=['aliases'])`, parses the alias
  array, dedupes case-insensitively. Returns empty list on any failure
  (missing key, network error, malformed response) so transient MB
  outages never trigger stricter quarantine decisions than the pre-fix
  behaviour. Empty mbid → no API call.
- `update_artist_aliases(artist_id, aliases)` — persists as JSON array
  to `artists.aliases`. Idempotent — overwrites prior value. Empty list
  clears the column. None artist_id is a no-op.
- `get_artist_aliases(artist_name) -> list[str]` — reads back by artist
  NAME (not id), case-insensitive. Used by the verifier where the
  expected artist comes from track metadata — there's no library row id
  at quarantine time. Returns empty list for unknown artists, missing
  data, or corrupt JSON (defensive against legacy rows).
# Worker integration
`MusicBrainzWorker._process_item` artist branch:
- After `update_artist_mbid` succeeds, fetch aliases for the matched
  MBID and persist via `update_artist_aliases`.
- Best-effort: alias fetch wrapped in try/except, failure logs at debug
  level, doesn't regress the match outcome.
- No alias call when the artist didn't match an MBID (nothing to
  enrich).
# Tests (23)
- `fetch_artist_aliases`: extracts names from MB response,
  case-insensitive dedup, skips empty/null entries, missing-key
  fallback, network failure → empty, empty mbid no API call, verifies
  `inc=aliases` request param.
- `update_artist_aliases`: persists as JSON, idempotent overwrite, empty
  list clears column, None id is no-op.
- `get_artist_aliases`: returns aliases for known artist,
  case-insensitive lookup, empty for unknown artist / no-aliases row,
  handles corrupt JSON + non-list shape gracefully.
- Worker integration: matched artist triggers fetch + persist, no alias
  call when not matched, alias-fetch failure doesn't break the match
  outcome.
# Verification
- 23/23 new tests pass
- Ruff clean
|
4 days ago |
|
|
235ada7e0f |
Add pure artist-name comparison helper with alias awareness
Issue #442 — files tagged with one spelling of an artist's name
(Japanese kanji `澤野弘之`) get quarantined when SoulSync expects the
romanized spelling (`Hiroyuki Sawano`). Raw similarity comparison scored
0% across scripts. MusicBrainz exposes alternate-spelling aliases on
every artist record but the verifier never consulted them.
This commit adds the pure helper that does the alias-aware comparison.
No I/O, no DB access, no network. Caller supplies the aliases (looked up
from library DB or live MB by later commits in this PR). Default
threshold matches the verifier's existing `ARTIST_MATCH_THRESHOLD` (0.6)
so wiring this in preserves current pass/fail semantics on the no-alias
path.
# API
```
artist_names_match(expected, actual, *, aliases=None, threshold=0.6,
                   similarity=None) -> (matched: bool, best_score: float)
```
- Direct compare first (fast path + baseline score)
- If below threshold, score each alias against `actual`
- First alias to clear threshold → match
- Returns the best score across all candidates so callers can log the
  score they made the decision on
```
best_alias_match(expected, actual, aliases=None, *, similarity=None)
    -> (winner: Optional[str], best_score: float)
```
Companion helper for callers that want to surface WHICH alias triggered
the match (debug logs, UI explanations). No threshold — purely
informative.
# Architectural choices
- **Pure function**: no I/O. Caller (verifier, future matching-engine
  consumers) owns alias lookup strategy + threshold tuning.
- **Custom similarity callable**: lets the verifier pass its
  parenthetical-stripping normaliser without this module having to know
  about it. Defaults to lowercase + SequenceMatcher (matches the
  verifier's existing behaviour).
- **Defensive coercion**: aliases input handles None entries, empty
  strings, non-string types, sets, tuples, lists — caller may feed raw
  MB response data without cleaning first.
- **Backward compat**: `aliases=None` or empty → behaves identically to
  a plain similarity check. Paths not yet wired up to alias lookup see
  no behaviour change.
# Tests (28)
- Direct compare (no aliases): exact / case / whitespace / fuzzy /
  different
- Cross-script with aliases: Japanese ↔ romanized (reporter's case 1),
  Cyrillic ↔ Latin (reporter's case 2), symmetric direction, no-match
  fallthrough so aliases don't mask genuine mismatches
- Aliases input handling: None, empty, set, tuple, None-entries,
  non-string entries
- Threshold: default matches verifier's 0.6, custom stricter, custom
  looser
- Custom similarity: applies to both direct + alias compare
- Best-alias-match introspection
- Backward compat parametrised across 5 cases
# What this commit does NOT do
This is the helper module + tests only. Subsequent commits in this PR
populate aliases (MB worker), provide live MB lookup with cache for
un-enriched artists, and wire the helper into the AcoustID verifier
where the quarantine decision actually fires.
|
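A short usage sketch against the signature above (the import path
follows the module named elsewhere in this PR; the alias list is
illustrative):
```python
from core.matching.artist_aliases import artist_names_match

matched, best_score = artist_names_match(
    "Hiroyuki Sawano", "澤野弘之",
    aliases=["澤野弘之", "Sawano Hiroyuki"],
)
# matched is True once any alias clears the 0.6 default threshold; best_score
# is the highest similarity seen across the direct and alias compares.
```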
4 days ago |
|
|
c02d51d60d |
Plex: trigger_library_scan + is_library_scanning use auto-detected section — fixes #535
# Bug
Plex servers with the music library named anything other than "Music"
(Música, Musique, Musik, Musica, 音乐, موسيقى, etc.) hit this error
after every import cycle:
soulsync.plex_client - ERROR - Failed to trigger library scan
for 'Music': Invalid library section: Music
soulsync.web_scan_manager - ERROR - Failed to initiate PLEX
library scan via web
Side effect: `wishlist.processing` kept reporting "Missing from
media server after sync" for tracks that DID import correctly, so
they got perpetually re-added to the wishlist.
# Root cause
`_find_music_library` correctly auto-detects the music section by
`section.type == 'artist'` and stores it on `self.music_library` —
works for any locale because the type is language-neutral. Read
methods (`get_artists`, etc.) route through `_get_music_sections`
which returns `[self.music_library]`, so they never had the bug.
But `trigger_library_scan` and `is_library_scanning` ignored
`self.music_library` and called
`self.server.library.section(library_name)` directly with the
hardcoded `"Music"` default. `server.library.section('Music')`
raises `NotFound` on any server whose section isn't literally
named "Music".
# Fix
Both methods now prefer `self.music_library` first, fall back to
literal `library_name` lookup only when auto-detection hasn't
populated the cached reference (test fixtures, edge cases).
`is_library_scanning`'s activity-feed match also corrected to
filter by the resolved section's actual title — the prior code
matched `library_name.lower() in activity_title.lower()` which
defaults to "music" and would never match activities for
non-English sections.
`trigger_library_scan`'s success log line now surfaces the actual
section title (`Música`) instead of the unused `library_name`
default ("Music") — confusing when debugging on non-English servers.
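A minimal sketch of the resolution order, with `client` standing in for
the Plex client instance (an assumption of this illustration):
```python
def resolve_music_section(client, library_name="Music"):
    # Prefer the locale-neutral auto-detected section (matched by section.type == 'artist').
    if getattr(client, "music_library", None) is not None:
        return client.music_library
    # Legacy fallback: literal-name lookup, only when auto-detection hasn't populated the cache.
    return client.server.library.section(library_name)
```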
# Tests added (13)
`tests/media_server/test_plex_non_english_section_name.py`:
- `test_uses_auto_detected_section_regardless_of_locale` — parametrised
across 6 locale variants (Música, Musique, Musik, Musica, 音乐, موسيقى).
Each verifies trigger_library_scan calls the auto-detected
section's `update()`, NOT a literal-name fallback. Stub raises
AssertionError on `server.library.section()` so a regression that
re-introduces the fallback fails loudly.
- `test_falls_back_to_literal_lookup_when_no_auto_detection` —
backward compat: music_library=None → literal lookup as before.
- `test_explicit_library_name_arg_used_only_when_no_auto_detection` —
auto-detected wins over explicit kwarg when both available.
- `test_logs_correct_section_label_on_success` — log line surfaces
resolved section title.
- 4 symmetric tests for is_library_scanning covering refreshing-attr
check, activity-feed title match, no-match for unrelated sections,
fallback path.
# Verification
- 13 new tests pass
- 84/84 media_server tests pass (no regression in the existing
Plex / Jellyfin / Navidrome suite)
- 2458 full suite passes (+13 from baseline)
- Ruff clean
|
5 days ago |
|
|
402d851cac |
Deezer search: drop advanced-syntax at endpoint, free-text + rerank wins
Live-API verification revealed advanced-syntax queries hurt more than
they help on this endpoint. Switching the import-modal Deezer search
back to free-text + local rerank.
# What live testing showed
Hit Deezer's public API with both query forms for the issue #534 case
(`Dirty White Boy` + `Foreigner`):
**Free-text (`q=Dirty White Boy Foreigner`):**
- Returns 21 results
- Real Foreigner Head Games studio cut at #1
- Live versions at #2-10
- Karaoke / cover variants at #11-15
**Advanced (`q=track:"Dirty White Boy" artist:"Foreigner"`):**
- Returns 12 results
- "(2008 Remaster)" at #1 — canonical Head Games cut MISSING from top 8
  entirely
- Live + alt-album versions follow
Advanced syntax DOES filter karaoke at the API level (none in the
12-result set vs. 5 at positions 11-15 in free-text), but it has its
own ranking bias that surfaces remasters / "Best Of" cuts ahead of the
canonical recording. Net regression for the user-facing goal.
# Fix
1. Endpoint reverts to free-text query with local rerank applied.
2. Local rerank gains "remaster" / "remastered" / "reissue" patterns
   under VARIANT_TAG_PATTERNS (soft 0.4× penalty — user may want them
   but they shouldn't outrank the original).
3. Client kwarg support (`track=` / `artist=` / `album=`) preserved for
   future opt-in callers (e.g. exact-match flows where API-level
   filtering matters more than ranking).
# Verified end-to-end against live Deezer API
Re-ran the exact #534 case through the live API + new rerank. Top 15
results post-rerank:
1. Dirty White Boy — Foreigner — Head Games ← REAL CUT AT TOP
2-10. Various Live versions
11-15. Karaoke / cover / tribute variants ← BURIED
Real Foreigner Head Games studio cut at #1, exactly the user's ask.
# Tests
- `test_relevance.py` — variant tag patterns extended; existing tests
  still pass (50 tests).
- `test_search_match_endpoints.py::test_joins_track_and_artist_into_free_text_query`
  — replaces `test_passes_track_and_artist_as_kwargs`; verifies endpoint
  sends free-text join, NOT field-scoped kwargs (the prior test asserted
  the wrong direction now).
- Karaoke-burying assertion at the endpoint still pins the user-visible
  behaviour.
- Client kwarg path tests untouched (still pin advanced-syntax
  construction for future opt-in callers).
# Verification
- 75 relevance + endpoint + query tests pass
- 2445 full suite passes
- Ruff clean
- Live Deezer API shows real cut at #1 post-rerank
|
5 days ago |
|
|
59992d42a8 |
Deezer search: free-text fallback when advanced query returns 0
Defensive followup to the relevance fix. Deezer's advanced search
syntax (`artist:"X"`) is documented as substring match, but in
practice it's brittle on artist name variants ("Foreigner [US]",
"The Foreigner") and on tracks indexed under non-canonical title
spellings. When the advanced query returns nothing, we'd previously
land at "No matches" — a regression vs. pre-fix behaviour where
free-text would have returned a less-relevant but non-empty set.
Fix: when the advanced query returns 0 results AND the caller used
field-scoped kwargs, fall back to a free-text join of the same
kwargs and re-query. Caller-side rerank still tightens whatever the
fallback returns, so the worst-case post-fix behaviour is the
pre-fix behaviour — never strictly worse.
Pulled the cache + parse + store dance into a private helper
(`_search_tracks_with_query`) so the orchestration can call it
twice (advanced → fallback) without code duplication. Single API
call when the advanced query has results — no wasted requests.
Diagnostic logger.debug fires when the fallback triggers so we can
see in production whether it's happening (and to which queries).
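An illustrative orchestration of the fallback, with `search_with_query`
standing in for the private `_search_tracks_with_query` helper and the
advanced-query construction simplified:
```python
def search_tracks(search_with_query, *, query=None, track=None, artist=None):
    parts = [f'track:"{track}"' if track else "", f'artist:"{artist}"' if artist else ""]
    advanced = " ".join(p for p in parts if p)
    results = search_with_query(advanced or query or "")
    if not results and advanced:
        # Advanced syntax came up empty; retry with a plain free-text join of the same terms.
        free_text = " ".join(v for v in (track, artist) if v)
        results = search_with_query(free_text)
    return results
```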
# Tests added (4)
- `test_falls_back_to_free_text_when_advanced_empty` — advanced
query returns 0, free-text returns hits; client returns the
free-text hits + both API calls fire.
- `test_no_fallback_when_advanced_query_has_results` — single hit
on advanced query → no second API call.
- `test_no_fallback_when_legacy_free_text_call` — legacy callers
already exhausted the only path; empty result is final.
- `test_no_fallback_when_query_unchanged` — empty kwargs path
doesn't trigger the fallback branch (used_advanced=False).
# Existing tests updated
The 4 prior `TestSearchTracksQueryWiring` + `TestSearchTracksCacheKey`
tests were stubbing `_api_get` to return empty `{'data': []}` and
asserting `assert_called_once`. With the new fallback, those stubs
trigger a second API call and the assertions break — even though
the FIRST call construction is what the tests cared about. Updated
the stubs to return one fake hit so the fallback doesn't fire, and
switched to `call_args_list[0]` for first-call inspection.
# Verification
- 18/18 deezer query tests pass (14 prior + 4 new)
- 2445 full suite passes (+4 from prior commit)
- Ruff clean
|
5 days ago |
|
|
8603cd6680
|
Preserve server source during album fill
- derive the destination server_source from the target album context
- write it on copied rows and retarget moved rows too
- cover the copy branch with a regression test
|
5 days ago |