Persist Find & Add selections as permanent server-playlist match overrides

Closes #585. When a Spotify source track had a versioned suffix not
present in the local file ("Iron Man - 2012 - Remaster" vs "Iron Man"),
the auto-matcher missed the pair. The user could click Find & Add to
pick the right local file — that worked, and the file was added to the
Plex playlist — but the source track stayed in Missing while the added
file appeared in Extra, because the matcher kept no record of the
user-confirmed pairing. On the next sync the source track re-attempted
the download.

Fix: every Find & Add selection now writes a (spotify_track_id →
server_track_id) override into sync_match_cache at confidence=1.0.
The matching algorithm runs an override pass BEFORE the existing
exact and fuzzy passes, so any user-confirmed pair short-circuits
straight to "matched" without going through title normalization.
Covers every mismatch class — dash-suffix remasters, covers /
karaoke, alt masters, cross-language titles, typo'd local files.
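
The override pre-pass can be sketched in standalone form — this is an
illustrative reimplementation, not the shipped helper (which lives in
core/sync/match_overrides.py as resolve_match_overrides):

```python
# Illustrative sketch of the override pre-pass. Mirrors the semantics of
# resolve_match_overrides but is a standalone reimplementation.
def override_pre_pass(source_tracks, server_tracks, cache_lookup):
    """Return {source_idx: server_idx} for user-confirmed pairings."""
    server_idx = {}
    for j, svr in enumerate(server_tracks):
        if isinstance(svr, dict) and svr.get("id") is not None:
            server_idx.setdefault(str(svr["id"]), j)  # first occurrence wins
    pairs, claimed = {}, set()
    for i, src in enumerate(source_tracks):
        sid = src.get("source_track_id") if isinstance(src, dict) else None
        if not sid:
            continue                                  # non-mirrored entry
        cached = cache_lookup(str(sid))
        j = server_idx.get(str(cached)) if cached is not None else None
        if j is None or j in claimed:
            continue                                  # cache miss or stale entry
        pairs[i] = j
        claimed.add(j)
    return pairs

cache = {"spotify-iron-man": 5001}                    # confidence=1.0 rows
pairs = override_pre_pass(
    [{"source_track_id": "spotify-iron-man", "name": "Iron Man - 2012 - Remaster"}],
    [{"id": 5001, "title": "Iron Man"}],
    lambda sid: cache.get(sid),
)
# pairs == {0: 0}: source index 0 is hard-matched to server index 0 and
# skips the exact/fuzzy passes entirely.
```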

- core/sync/match_overrides.py (new) — pure helpers
  resolve_match_overrides + record_manual_match. 18 boundary tests
  pin: cache hits, cache misses falling through to normal matching,
  stale-cache (server track removed) handled gracefully, str/int
  id coercion, partial cache hits, defensive against non-dict
  inputs and DB exceptions.
- web_server.py — get_server_playlist_tracks runs the override
  pre-pass before exact/fuzzy matching. server_playlist_add_track
  accepts source_track_id + source_title + source_artist and
  persists the override after every successful add (Plex / Jellyfin
  / Navidrome). source_track_id added to source_tracks payload so
  the frontend has it.
- webui/static/pages-extra.js — _serverSelectTrack sends
  source_track_id + source_title + source_artist when adding a
  track from a mirrored playlist context.
- Sync match cache schema unchanged — already had UNIQUE
  (spotify_track_id, server_source) which fits the override
  semantics perfectly. Manual overrides distinguished from
  auto-discovered matches by confidence=1.0.
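
The cache semantics in the last bullet can be sketched as a SQLite
upsert. The column set below is an assumption for illustration (the
real sync_match_cache schema isn't reproduced here), but the UNIQUE
pair and the confidence=1.0 marker follow the description above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Assumed minimal shape of sync_match_cache for this sketch.
conn.execute("""
    CREATE TABLE sync_match_cache (
        spotify_track_id TEXT NOT NULL,
        server_source    TEXT NOT NULL,
        server_track_id  TEXT NOT NULL,
        confidence       REAL NOT NULL,
        UNIQUE (spotify_track_id, server_source)
    )
""")
upsert = """
    INSERT INTO sync_match_cache
        (spotify_track_id, server_source, server_track_id, confidence)
    VALUES (?, ?, ?, 1.0)
    ON CONFLICT (spotify_track_id, server_source)
    DO UPDATE SET server_track_id = excluded.server_track_id,
                  confidence = excluded.confidence
"""
conn.execute(upsert, ("spotify-iron-man", "plex", "5001"))
# A later Find & Add for the same source track replaces the mapping
# instead of adding a second row — one override per (track, server).
conn.execute(upsert, ("spotify-iron-man", "plex", "5002"))
rows = conn.execute(
    "SELECT spotify_track_id, server_track_id, confidence FROM sync_match_cache"
).fetchall()
# rows == [("spotify-iron-man", "5002", 1.0)]
```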

Full suite: 3010 passed.
pull/593/head
Broque Thomas 1 day ago
parent 4226be761a
commit 083355ec8c

@@ -0,0 +1,119 @@
"""Sync match overrides — user-confirmed source→server track pairings.

When a user picks a local file via "Find & Add" on the Server Playlist
compare view, that selection should persist as a hard match across
future syncs, bypassing the fuzzy/exact title-match algorithm
entirely. This module provides pure helpers that the web layer calls
to resolve and persist those overrides through the existing
`sync_match_cache` table.

Override semantics:
- One mapping per (source_track_id, server_source). The UNIQUE
  constraint on the table enforces a single mapping per pair.
- Stored with confidence=1.0 to distinguish from auto-discovered
  matches (which use the actual title-similarity score).
- Read at the START of the matching algorithm, before pass-1
  exact and pass-2 fuzzy. Skipped sources don't re-enter the
  normal matching pool.
- Stale-cache safe: if the cached server_track_id doesn't exist
  in the current server_tracks list (track removed from server),
  the override is silently skipped and normal matching runs.
"""
from __future__ import annotations

from typing import Any, Callable, Dict, List, Optional


def resolve_match_overrides(
    source_tracks: List[Dict[str, Any]],
    server_tracks: List[Dict[str, Any]],
    cache_lookup: Callable[[str], Optional[Any]],
) -> Dict[int, int]:
    """Map source-track indexes to server-track indexes for cached overrides.

    Pure function. `cache_lookup(source_track_id) -> server_track_id or
    None` is injected by the caller (the web layer wraps the DB call).

    Returns ``{source_idx: server_idx}``. Only includes pairs where:
    - the source track has a non-empty `source_track_id`
    - cache_lookup returns a server_track_id
    - that server_track_id exists in server_tracks (no stale cache
      entries pointing at deleted tracks)
    - the server track hasn't already been claimed by an earlier
      override (the UNIQUE constraint on the cache table prevents
      this in practice; the check here is defensive)

    The caller uses the returned dict to short-circuit the per-source
    matching loop: indices in the dict skip the exact/fuzzy passes.
    """
    if not source_tracks or not server_tracks:
        return {}
    server_id_to_idx: Dict[str, int] = {}
    for j, svr in enumerate(server_tracks):
        sid = svr.get("id") if isinstance(svr, dict) else None
        if sid is not None:
            key = str(sid)
            if key not in server_id_to_idx:
                server_id_to_idx[key] = j
    overrides: Dict[int, int] = {}
    used_server: set[int] = set()
    for i, src in enumerate(source_tracks):
        if not isinstance(src, dict):
            continue
        src_id = src.get("source_track_id")
        if not src_id:
            continue
        try:
            cached_server_id = cache_lookup(str(src_id))
        except Exception:
            cached_server_id = None
        if not cached_server_id:
            continue
        j = server_id_to_idx.get(str(cached_server_id))
        if j is None or j in used_server:
            continue
        overrides[i] = j
        used_server.add(j)
    return overrides

def record_manual_match(
    db: Any,
    source_track_id: str,
    server_source: str,
    server_track_id: Any,
    server_track_title: str = "",
    source_title: str = "",
    source_artist: str = "",
) -> bool:
    """Persist a user-confirmed source→server pairing as a hard override.

    Wraps `db.save_sync_match_cache` with confidence=1.0 (the manual
    match marker). The normalized title/artist fields are informational
    only; the cache is keyed by `(spotify_track_id, server_source)`,
    so the normalization is just for inspection and future debugging.

    Returns True on persist success, False on any failure (DB error,
    missing args, etc.). Never raises.
    """
    if not source_track_id or not server_source or server_track_id is None:
        return False
    if not hasattr(db, "save_sync_match_cache"):
        return False
    try:
        return bool(db.save_sync_match_cache(
            spotify_track_id=str(source_track_id),
            normalized_title=(source_title or "").lower().strip(),
            normalized_artist=(source_artist or "").lower().strip(),
            server_source=server_source,
            server_track_id=server_track_id,
            server_track_title=server_track_title or "",
            confidence=1.0,
        ))
    except Exception:
        return False

@@ -0,0 +1,193 @@
from unittest.mock import MagicMock

from core.sync.match_overrides import record_manual_match, resolve_match_overrides

# ──────────────────────────────────────────────────────────────────────
# resolve_match_overrides — pre-pair source→server from cache
# ──────────────────────────────────────────────────────────────────────


def test_empty_inputs_return_empty_dict():
    assert resolve_match_overrides([], [], lambda _id: None) == {}
    assert resolve_match_overrides([{"source_track_id": "x"}], [], lambda _id: "y") == {}
    assert resolve_match_overrides([], [{"id": "y"}], lambda _id: None) == {}


def test_single_cache_hit_returns_pair():
    sources = [{"source_track_id": "spotify-iron-man", "name": "Iron Man - 2012 - Remaster"}]
    servers = [{"id": 5001, "title": "Iron Man"}]
    cache = {"spotify-iron-man": 5001}
    result = resolve_match_overrides(sources, servers, lambda sid: cache.get(sid))
    assert result == {0: 0}

def test_multiple_overrides_resolve_correctly():
    sources = [
        {"source_track_id": "iron"},
        {"source_track_id": "para"},
        {"source_track_id": "war"},
    ]
    servers = [
        {"id": 5001, "title": "Iron Man"},
        {"id": 5002, "title": "Paranoid"},
        {"id": 5003, "title": "War Pigs"},
    ]
    cache = {"iron": 5001, "para": 5002, "war": 5003}
    result = resolve_match_overrides(sources, servers, lambda sid: cache.get(sid))
    assert result == {0: 0, 1: 1, 2: 2}


def test_source_without_track_id_skipped():
    sources = [
        {"source_track_id": "iron", "name": "Iron Man"},
        {"name": "Paranoid"},  # no source_track_id (e.g. legacy / non-mirrored)
    ]
    servers = [{"id": 5001, "title": "Iron Man"}, {"id": 5002, "title": "Paranoid"}]
    cache = {"iron": 5001}
    result = resolve_match_overrides(sources, servers, lambda sid: cache.get(sid))
    assert result == {0: 0}


def test_cache_miss_skipped():
    sources = [{"source_track_id": "iron"}, {"source_track_id": "para"}]
    servers = [{"id": 5001, "title": "Iron Man"}, {"id": 5002, "title": "Paranoid"}]
    result = resolve_match_overrides(sources, servers, lambda sid: None)
    assert result == {}


def test_stale_cache_pointing_at_missing_server_track_skipped():
    # User cached a match → file got deleted from server → server_tracks
    # no longer has 5001 → don't pair, fall through to normal matching.
    sources = [{"source_track_id": "iron"}]
    servers = [{"id": 9999, "title": "Different Track"}]
    cache = {"iron": 5001}  # 5001 no longer exists
    result = resolve_match_overrides(sources, servers, lambda sid: cache.get(sid))
    assert result == {}


def test_server_id_str_int_coercion():
    # Cache might store ints, server_tracks might have str IDs (Plex
    # ratingKey is str). The helper coerces both sides to str.
    sources = [{"source_track_id": "iron"}]
    servers = [{"id": "5001", "title": "Iron Man"}]
    cache = {"iron": 5001}  # int from cache
    result = resolve_match_overrides(sources, servers, lambda sid: cache.get(sid))
    assert result == {0: 0}

def test_two_sources_pointing_at_same_server_track_only_first_wins():
    # Defensive — the UNIQUE constraint prevents this in production, but
    # cache_lookup is injectable so we verify the safety.
    sources = [{"source_track_id": "a"}, {"source_track_id": "b"}]
    servers = [{"id": 5001, "title": "Iron Man"}]
    cache = {"a": 5001, "b": 5001}
    result = resolve_match_overrides(sources, servers, lambda sid: cache.get(sid))
    assert result == {0: 0}


def test_cache_lookup_raising_treated_as_miss():
    sources = [{"source_track_id": "iron"}]
    servers = [{"id": 5001, "title": "Iron Man"}]

    def boom(_sid):
        raise RuntimeError("db down")

    result = resolve_match_overrides(sources, servers, boom)
    assert result == {}


def test_non_dict_source_or_server_skipped():
    sources = [None, "string", {"source_track_id": "iron"}]
    servers = [{"id": 5001, "title": "Iron Man"}]
    cache = {"iron": 5001}
    result = resolve_match_overrides(sources, servers, lambda sid: cache.get(sid))
    # source idx 2 → server idx 0
    assert result == {2: 0}


def test_server_without_id_skipped():
    sources = [{"source_track_id": "iron"}]
    servers = [{"title": "Iron Man"}]  # no id
    cache = {"iron": 5001}
    result = resolve_match_overrides(sources, servers, lambda sid: cache.get(sid))
    assert result == {}


def test_partial_cache_hits_only_pair_those():
    sources = [
        {"source_track_id": "iron"},
        {"source_track_id": "para"},
        {"source_track_id": "war"},
    ]
    servers = [
        {"id": 5001, "title": "Iron Man"},
        {"id": 5002, "title": "Paranoid"},
        {"id": 5003, "title": "War Pigs"},
    ]
    # Only iron + war are cached; para falls through to normal matching.
    cache = {"iron": 5001, "war": 5003}
    result = resolve_match_overrides(sources, servers, lambda sid: cache.get(sid))
    assert result == {0: 0, 2: 2}

# ──────────────────────────────────────────────────────────────────────
# record_manual_match — persist user-confirmed pair
# ──────────────────────────────────────────────────────────────────────


def test_record_persists_with_confidence_one():
    db = MagicMock()
    db.save_sync_match_cache.return_value = True
    ok = record_manual_match(
        db,
        source_track_id="spotify-iron-man",
        server_source="plex",
        server_track_id=5001,
        server_track_title="Iron Man",
        source_title="Iron Man - 2012 - Remaster",
        source_artist="Black Sabbath",
    )
    assert ok is True
    db.save_sync_match_cache.assert_called_once()
    kwargs = db.save_sync_match_cache.call_args.kwargs
    assert kwargs["spotify_track_id"] == "spotify-iron-man"
    assert kwargs["server_source"] == "plex"
    assert kwargs["server_track_id"] == 5001
    assert kwargs["server_track_title"] == "Iron Man"
    assert kwargs["confidence"] == 1.0
    assert kwargs["normalized_title"] == "iron man - 2012 - remaster"
    assert kwargs["normalized_artist"] == "black sabbath"


def test_record_returns_false_when_required_fields_missing():
    db = MagicMock()
    assert record_manual_match(db, source_track_id="", server_source="plex", server_track_id=1) is False
    assert record_manual_match(db, source_track_id="x", server_source="", server_track_id=1) is False
    assert record_manual_match(db, source_track_id="x", server_source="plex", server_track_id=None) is False
    db.save_sync_match_cache.assert_not_called()


def test_record_returns_false_when_db_save_returns_false():
    db = MagicMock()
    db.save_sync_match_cache.return_value = False
    assert record_manual_match(db, source_track_id="x", server_source="plex", server_track_id=1) is False


def test_record_swallows_db_exception():
    db = MagicMock()
    db.save_sync_match_cache.side_effect = RuntimeError("db boom")
    assert record_manual_match(db, source_track_id="x", server_source="plex", server_track_id=1) is False


def test_record_returns_false_when_db_lacks_method():
    class NoSaveDB:
        pass

    assert record_manual_match(NoSaveDB(), source_track_id="x", server_source="plex", server_track_id=1) is False


def test_record_handles_empty_optional_strings():
    db = MagicMock()
    db.save_sync_match_cache.return_value = True
    ok = record_manual_match(db, source_track_id="x", server_source="plex", server_track_id=1)
    assert ok is True
    kwargs = db.save_sync_match_cache.call_args.kwargs
    assert kwargs["normalized_title"] == ""
    assert kwargs["normalized_artist"] == ""
    assert kwargs["server_track_title"] == ""

@@ -18986,6 +18986,10 @@ def get_server_playlist_tracks(playlist_id):
                'image_url': img,
                'duration_ms': t.get('duration_ms', 0),
                'position': t.get('position', 0),
                # Spotify track id — required for the user-confirmed
                # match override lookup (sync_match_cache). Null for
                # iTunes-only sources.
                'source_track_id': t.get('source_track_id') or '',
            })
        elif playlist_name:
            # Legacy fallback: cross-reference with sync history
@@ -19025,6 +19029,21 @@ def get_server_playlist_tracks(playlist_id):
    used_server_indices = set()
    unmatched_source = []  # (index_in_combined, src_dict) for fuzzy second pass

    # Pass 0: User-confirmed match overrides from sync_match_cache.
    # When a user previously picked a local file via "Find & Add",
    # the (source_track_id → server_track_id) mapping was persisted
    # at confidence=1.0. Apply those FIRST so they bypass the
    # exact/fuzzy passes entirely. Stale-cache safe — if the cached
    # server track no longer exists, the override is silently
    # skipped and normal matching runs.
    from core.sync.match_overrides import resolve_match_overrides
    _db_for_overrides = get_database()
    _override_pairs = resolve_match_overrides(
        source_tracks,
        server_tracks,
        lambda src_id: ((_db_for_overrides.read_sync_match_cache(src_id, active_server) or {}).get('server_track_id')),
    )

    # Pass 1: Exact title match (normalized — strips feat./ft. qualifiers)
    for i, src in enumerate(source_tracks):
        src_name = src.get('name', '')
@@ -19039,6 +19058,19 @@ def get_server_playlist_tracks(playlist_id):
            'duration_ms': src.get('duration_ms', 0), 'position': src.get('position', i),
        }

        # Override hit — paired by the user; skip exact/fuzzy matching.
        if i in _override_pairs:
            j_override = _override_pairs[i]
            used_server_indices.add(j_override)
            combined.append({
                'source_track': src_entry,
                'server_track': server_tracks[j_override],
                'match_status': 'matched',
                'confidence': 1.0,
                'override': True,
            })
            continue

        src_norm = _norm_title(src_name)
        best_idx = -1
        for j, svr in enumerate(server_tracks):
@@ -19211,14 +19243,48 @@ def server_playlist_replace_track(playlist_id):
        return jsonify({"success": False, "error": str(e)}), 500


def _persist_find_and_add_match(source_track_id, server_source, server_track_id, server_track_title, source_title, source_artist):
    """Wrap match-override persistence with the active DB. No-op when
    source_track_id is missing (e.g. add to a non-mirrored playlist)."""
    if not source_track_id:
        return
    try:
        from core.sync.match_overrides import record_manual_match
        ok = record_manual_match(
            get_database(),
            source_track_id=source_track_id,
            server_source=server_source,
            server_track_id=server_track_id,
            server_track_title=server_track_title,
            source_title=source_title,
            source_artist=source_artist,
        )
        if ok:
            logger.info(f"[ServerPlaylist] Persisted Find & Add override: {source_track_id} → {server_track_id} ({server_source})")
    except Exception as e:
        logger.warning(f"[ServerPlaylist] Failed to persist Find & Add override: {e}")

@app.route('/api/server/playlist/<playlist_id>/add-track', methods=['POST'])
def server_playlist_add_track(playlist_id):
    """Add a track to a server playlist at a specific position.

    When the optional `source_track_id` is provided (the Spotify track id
    from a mirrored playlist), the user's selection is also persisted to
    sync_match_cache so future syncs auto-match this source→server pair
    without requiring the user to re-trigger Find & Add.
    """
    try:
        data = request.get_json()
        track_id = data.get('track_id')
        playlist_name = data.get('playlist_name', '')
        position = data.get('position')  # 0-based index; None = append
        # Optional Spotify source track id — when present, the (source →
        # server) mapping is persisted as a hard match override.
        source_track_id = data.get('source_track_id') or ''
        source_title = data.get('source_title') or ''
        source_artist = data.get('source_artist') or ''
        server_track_title = data.get('server_track_title') or ''

        if not track_id:
            return jsonify({"success": False, "error": "track_id required"}), 400
@@ -19271,6 +19337,7 @@ def server_playlist_add_track(playlist_id):
            new_id = str(raw_playlist.ratingKey)
            logger.info(f"[ServerPlaylist] Added track to playlist, playlist ID: {new_id}")
            _persist_find_and_add_match(source_track_id, active_server, track_id, server_track_title or new_item.title, source_title, source_artist)
            return jsonify({"success": True, "message": "Track added", "new_playlist_id": new_id})
        elif active_server == 'jellyfin' and media_server_engine.client('jellyfin'):

@@ -19280,6 +19347,7 @@ def server_playlist_add_track(playlist_id):
            track_ids.insert(pos, track_id)
            new_track_objs = [type('T', (), {'ratingKey': tid, 'title': ''})() for tid in track_ids]
            media_server_engine.client('jellyfin').update_playlist(playlist_name, new_track_objs)
            _persist_find_and_add_match(source_track_id, active_server, track_id, server_track_title, source_title, source_artist)
            return jsonify({"success": True, "message": "Track added"})
        elif active_server == 'navidrome' and media_server_engine.client('navidrome'):

@@ -19289,6 +19357,7 @@ def server_playlist_add_track(playlist_id):
            track_ids.insert(pos, track_id)
            new_track_objs = [type('T', (), {'ratingKey': tid, 'title': ''})() for tid in track_ids]
            media_server_engine.client('navidrome').create_playlist(playlist_name, new_track_objs, playlist_id=playlist_id)
            _persist_find_and_add_match(source_track_id, active_server, track_id, server_track_title, source_title, source_artist)
            return jsonify({"success": True, "message": "Track added"})

        return jsonify({"success": False, "error": f"Unsupported server: {active_server}"}), 400

@@ -3416,6 +3416,7 @@ const WHATS_NEW = {
'2.5.2': [
// --- May 13, 2026 — 2.5.2 release ---
{ date: 'May 13, 2026 — 2.5.2 release' },
{ title: 'Server Playlists: Find & Add Now Persists As A Permanent Match', desc: 'github issue #585: when a spotify track name had a versioned suffix not present in the local file (e.g. "Iron Man - 2012 - Remaster" vs "Iron Man") the auto-matcher missed the pair. user could click Find & Add to manually pick the right local file — that worked, file got added to the plex playlist — but the source spotify track stayed in Missing while the added file showed up under Extra, because the matcher had no record of the user-confirmed pairing. on the next sync the source track would re-quarantine and try to download all over again. fix: every Find & Add selection now writes a `(spotify_track_id → server_track_id)` override into `sync_match_cache` at confidence=1.0. the matching algorithm runs an override pass BEFORE the existing exact and fuzzy passes, so any user-confirmed pair short-circuits straight to "matched" without going through normalization at all. covers every kind of mismatch — dash-suffix remasters, covers / karaoke versions, alt masters, cross-language titles, typo\'d local files, anything. logic lifted to `core/sync/match_overrides.py` (pure helpers `resolve_match_overrides` + `record_manual_match`). 18 boundary tests pin: cache-hit pairs, cache-miss falls through, stale-cache (server track removed) handled gracefully, two sources pointing at same server track (UNIQUE-violation defense), str/int id coercion, partial cache hits, defensive against non-dict inputs and DB exceptions. legacy entries without `source_track_id` (non-mirrored playlists) just skip the override path. works across plex / jellyfin / navidrome.', page: 'sync' },
{ title: 'Quarantine Management — See, Approve, Delete Files Without Touching The Filesystem', desc: 'github issue #584: quarantined files used to just sit in `ss_quarantine/` with a thin sidecar — no UI, no recovery, no way to see what got dropped or why. new **Quarantine** tab on the existing Library History modal (downloads page → Download History button) lists every quarantined file with the same row chrome as the Downloads + Server Imports tabs: thumb placeholder, expected track + artist, original filename, trigger badge (Duration / AcoustID / Bit Depth), relative time, expandable details panel showing the full failure reason. three per-row actions: **Approve** (restores the file, re-runs post-processing with ONLY the failing check skipped, lands in your library with full tags + lyrics + scan), **Recover** (legacy fallback for entries quarantined before this PR with thin sidecars — moves to Staging so you finish via Import flow), **Delete** (permanent removal of file + sidecar). all three use the themed soulsync confirm modal + toast feedback (no native browser alert / confirm). per-check bypass means approving a duration-mismatch file still runs AcoustID; approving an AcoustID failure still runs bit-depth — other quality gates stay live so you can only override one trigger at a time. files that fail a different check after approval get re-quarantined with the new trigger label so you can decide again. sidecar now persists the full json-safe context so approve has everything the pipeline needs to re-process. download modal status differentiates "🛡️ Quarantined" from "❌ Failed" so recoverable files are visible at a glance. logic lifted to pure helpers in `core/imports/quarantine.py` (list / delete / approve / recover_to_staging / serialize_quarantine_context) with 27 boundary tests covering orphan files / orphan sidecars / corrupt sidecars / collision-safe filename restoration / full-context vs thin-sidecar dispatch / json round-trip safety. four new endpoints. pipeline change is per-check conditionals at the existing quarantine sites — no blanket skip-all flag.', page: 'downloads' },
{ title: 'Configurable Duration Tolerance For Quarantined Tracks', desc: 'discord question: tracks were quarantining when their actual length drifted by a few seconds from what spotify/musicbrainz reported (3s tolerance hardcoded, 5s for tracks >10min). live recordings, alternate masterings, and some legitimate uploads routinely drift more than that. new setting on settings → metadata → post-processing: "duration tolerance (seconds)". `0 = auto` (preserves the existing 3s/5s defaults). raise it to 10 / 15 / 20 if your library has a lot of drift-prone material. capped at 60s — past that the check is effectively off. applies to ALL matched downloads (soulseek / tidal / qobuz / hifi / youtube / deezer-direct) since they all flow through the same post-process integrity check. logic lifted to a pure helper `core/imports/file_integrity.py:resolve_duration_tolerance` that coerces the config value (none / empty / 0 / negative / unparseable / above-cap) to either a float override or `None` for the auto-scaled default. 12 tests pin every input shape.', page: 'settings' },
{ title: 'Soulseek Downloads: Multi-Artist Tags Now Get Written Properly', desc: 'discord report: tracks downloaded via soulseek were getting tagged with primary artist only (no collab artists), while the same track downloaded via deezer tagged everyone correctly. trace: the soulseek matched-download context constructed `original_search_result` with `artist` (singular string) but no `artists` (list), even though the full multi-artist list lived on `track_info` (the matched spotify track object). `core/metadata/source.py:extract_source_metadata` only read `original_search.artists`, so soulseek path always fell through to the single-artist branch. fix: lifted artist resolution into a pure helper `core/metadata/artist_resolution.py:resolve_track_artists` that walks `original_search.artists` → `track_info.artists` → `artist_dict.name` fallback chain. handles all three list-item shapes (spotify-style dicts, bare strings, anything else stringified). 13 tests pin the resolution order, fallback chain, mixed-shape normalization, whitespace stripping, empty/none handling. composes with the existing deezer per-track upgrade (still fires when single-artist + track_id available) and feat_in_title / artist_separator settings (still drive the joined ARTIST string downstream).', page: 'downloads' },

@@ -1821,6 +1821,11 @@ async function _serverSelectTrack(trackIndex, mode, newTrackId, el) {
        for (let k = 0; k < trackIndex; k++) {
            if (_serverEditorState.tracks[k]?.server_track) serverPos++;
        }
        // source_track carries source_track_id (Spotify ID) when this
        // came from a mirrored playlist — the backend uses it to
        // persist the Find & Add selection as a permanent match
        // override so future syncs auto-pair without user action.
        const srcTrack = track.source_track || {};
        response = await fetch(`/api/server/playlist/${_serverEditorState.playlistId}/add-track`, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },

@@ -1828,6 +1833,9 @@ async function _serverSelectTrack(trackIndex, mode, newTrackId, el) {
                track_id: newTrackId,
                playlist_name: _serverEditorState.playlistName,
                position: serverPos,
                source_track_id: srcTrack.source_track_id || '',
                source_title: srcTrack.name || '',
                source_artist: srcTrack.artist || '',
            })
        });
    }
