The Dream
We built AssetHoard to solve a problem every game developer knows: assets scattered across dozens of folders, forgotten purchases buried in download directories, and no good way to find that perfect texture you know you bought three years ago.
The tech stack was solid. Tauri 2 for the native shell, SvelteKit for the frontend, Rust for the backend, SQLite for storage. Everything worked great in testing.
Then a beta tester tried to import a large asset bundle: 120,000 files. The app froze for minutes.
The Naive Approach
Our first implementation was straightforward. Scan the folder in Rust, send the file list to the frontend, let the user configure options, then import:
#[tauri::command]
fn scan_folder(path: String) -> Vec<ScannedFile> {
    walkdir::WalkDir::new(path)
        .into_iter()
        .filter_map(|e| e.ok())
        .filter(|e| is_supported_extension(e.path()))
        .map(|e| ScannedFile::from_path(e.path()))
        .collect()
}

And on the TypeScript side:

const files = await invoke('scan_folder', { path: selectedFolder });
// files now contains 120,000 objects
setScannedFiles(files);

Simple. Elegant. Completely broken at scale.
The IPC Wall
Tauri's invoke system is fantastic for most use cases. You call a Rust function, it returns data, TypeScript receives it as a Promise. Under the hood, it serialises your Rust structs to JSON, passes them through the webview bridge, and deserialises them in JavaScript.
For 120,000 ScannedFile objects, that meant roughly 70MB of JSON serialisation in Rust, another 70MB crossing the IPC bridge, 70MB of JSON parsing in JavaScript, and 70MB stored in JavaScript memory.
During all of this, the UI thread is blocked. No spinner. No progress. Just a frozen window and a user wondering if the app crashed.
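As a sanity check on that 70MB figure, here is the back-of-the-envelope arithmetic. The ~600 bytes per serialised file is an assumption for illustration, not a measurement:

// Rough payload estimate for the naive approach (assumed sizes, not measured):
const fileCount = 120_000;
const bytesPerSerialisedFile = 600;   // path, size, extension, timestamps as JSON
const payloadMb = (fileCount * bytesPerSerialisedFile) / (1024 * 1024);
console.log(`${payloadMb.toFixed(0)} MB`);   // ~69 MB, in line with the 70MB figure above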
Attempt One: Stream the Data
Our first instinct was streaming. Send files in batches of 1,000, update a progress bar, keep the UI responsive.
for chunk in files.chunks(1000) {
    app_handle.emit("scan-chunk", chunk)?;
}

Better! The UI stayed responsive. But now we had a new problem: 120 separate IPC calls, each with its own serialisation overhead. The scan that took 8 seconds in pure Rust now took 45 seconds with the streaming overhead.

Worse still, we had to accumulate all those chunks in JavaScript memory anyway. We had traded one 70MB transfer for 120 transfers of roughly 600KB each, plus the overhead of 120 event emissions.
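The receiving side looked roughly like this (a sketch rather than AssetHoard's actual code; the ScannedFile shape is illustrative):

import { listen } from '@tauri-apps/api/event';

interface ScannedFile { path: string; size: number; }   // illustrative shape

// Every chunk still lands on the JavaScript heap; we only changed how it arrives.
const accumulated: ScannedFile[] = [];
const unlisten = await listen<ScannedFile[]>('scan-chunk', (event) => {
  accumulated.push(...event.payload);
  console.log(`received ${accumulated.length} files so far`);   // drive the progress UI here
});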
The Revelation: Do Not Send What You Do Not Need
We stepped back and looked at what the import process actually involved. It was not a single operation but a multistep pipeline: scan the filesystem and count files, then process metadata for each asset, then generate thumbnails for previews. For smaller imports of around 2,000 files, running this as one synchronous operation worked extremely well. The user clicked import, waited a few seconds, and everything appeared in their library.
But at 120,000 files, that same pipeline ground to a halt. And the more we looked at it, the clearer it became that we were sending data to the frontend that the frontend did not actually need.
The user does not need to see 120,000 individual files. They need to see the total file count, a breakdown by type (textures, models, audio), a list of top-level folders, and maybe some detected engine packages. Everything else can stay in Rust until import time.
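In TypeScript terms, everything the import dialog needs fits in a few fields. An illustrative shape that mirrors the Rust struct shown below, not the exact AssetHoard types:

interface ScanSummary {
  scan_id: string;                      // handle used later to import from the Rust-side cache
  total_files: number;
  by_type: Record<string, number>;      // e.g. { texture: 80000, model: 30000, audio: 10000 }
  top_level_folders: string[];
  detected_packages: string[];
}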
The Cache Pattern
We introduced a scan cache on the Rust side:
lazy_static! {
    static ref SCAN_CACHE: Mutex<HashMap<String, Vec<ScannedFile>>> =
        Mutex::new(HashMap::new());
}

#[tauri::command]
fn scan_folder_async(path: String) -> ScanSummary {
    let files: Vec<ScannedFile> = /* ... scan logic ... */;
    let scan_id = Uuid::new_v4().to_string();

    let summary = ScanSummary {
        scan_id: scan_id.clone(),
        total_files: files.len(),
        by_type: count_by_type(&files),
        top_level_folders: extract_folders(&files),
        detected_packages: find_packages(&path),
    };

    // The full file list stays on the Rust side; only the summary crosses IPC.
    SCAN_CACHE.lock().unwrap().insert(scan_id, files);

    summary
}

Now the IPC payload is tiny: maybe 2KB instead of 70MB. The frontend gets everything it needs to show the user, and when they click Import, we just send the scan_id back:
#[tauri::command]
fn import_files(scan_id: String, options: ImportOptions) -> Result<ImportResult, String> {
    // Take ownership of the cached file list; a missing entry means the scan expired.
    let files = SCAN_CACHE.lock().unwrap()
        .remove(&scan_id)
        .ok_or("Scan expired")?;
    // Import using the cached files
    // ...
}

The files never cross the IPC bridge. Rust scans them, Rust stores them, Rust imports them.
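On the frontend, the whole exchange is now two small calls. A sketch using the ScanSummary shape from earlier; the import options are placeholders, and Tauri's default camelCase conversion maps scanId to the Rust scan_id parameter:

import { invoke } from '@tauri-apps/api/core';

// Step 1: scan. Only the ~2KB summary crosses the bridge.
const summary = await invoke<ScanSummary>('scan_folder_async', { path: selectedFolder });

// Step 2: import. We send back the scan_id, not 120,000 file objects.
const result = await invoke('import_files', {
  scanId: summary.scan_id,
  options: { copyFiles: true, generateThumbnails: true },   // placeholder options
});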
Progress Events: The Right Way
With the heavy data staying in Rust, we could use Tauri's event system for what it excels at: lightweight progress updates.
#[tauri::command]
async fn scan_folder_async(path: String, app: AppHandle) -> Result<ScanSummary, String> {
    let mut files = Vec::new();
    for entry in WalkDir::new(&path) {
        if let Ok(e) = entry {
            if is_supported(&e) {
                files.push(ScannedFile::from(&e));
                // Every 1,000 files, emit a tiny progress event to the frontend.
                if files.len() % 1000 == 0 {
                    app.emit("scan-progress", ScanProgress {
                        files_found: files.len(),
                        current_path: e.path().display().to_string(),
                    }).map_err(|err| err.to_string())?;
                }
            }
        }
    }
    // Cache and return summary...
}

Now the frontend receives a lightweight event every 1,000 files. Just two fields, maybe 200 bytes. The UI updates smoothly, the user sees "Scanning... 45,000 files found", and Rust keeps churning through the filesystem.
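Subscribing to those events on the frontend takes a few lines (a sketch; the local variables stand in for whatever reactive state drives the Svelte dialog):

import { listen } from '@tauri-apps/api/event';

interface ScanProgress {
  files_found: number;
  current_path: string;
}

let filesFound = 0;
let currentPath = '';

// Each event is ~200 bytes; the heavy file list never leaves Rust.
const unlisten = await listen<ScanProgress>('scan-progress', (event) => {
  filesFound = event.payload.files_found;
  currentPath = event.payload.current_path;
});

// Call unlisten() when the import dialog is destroyed so the handler does not leak.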
Send to Background
But what if the user does not want to watch a progress bar? What if they want to keep browsing their existing library while the scan runs?
We added a Send to Background button that appears after 2 seconds of scanning. Click it, and the dialog closes, a notification appears in the header with live progress, and when complete, clicking the notification reopens the import dialog with all the options ready.
This required a scan session system: a way to tie a running scan to a specific notification so that, when the scan completes, the notification knows how to resume the import flow.
function handleSendToBackground() {
  const notifId = createNotification('import', `Scanning ${folderName}`, {
    progress: { current: scanProgress, total: scanProgress },
    status: 'in_progress',
  });
  const sessionId = createScanSession(folderPath, notifId);
  backgroundScanSessionId = sessionId;
  backgroundNotifId = notifId;
  importDialogOpen.set(false);
  // Original scan continues running...
}
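The completion side of the session looks roughly like this. Note that the 'scan-complete' event name and the updateNotification / markSessionComplete helpers are illustrative stand-ins, not AssetHoard's real API:

import { listen } from '@tauri-apps/api/event';

// Hypothetical completion handler: flip the notification to 'complete' and remember
// the scan_id so clicking the notification can reopen the import dialog later.
await listen<{ scan_id: string; total_files: number }>('scan-complete', (event) => {
  if (!backgroundScanSessionId) return;                      // nothing was backgrounded
  updateNotification(backgroundNotifId, {
    status: 'complete',
    message: `Scan finished: ${event.payload.total_files} files`,
  });
  markSessionComplete(backgroundScanSessionId, event.payload.scan_id);
});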
The Subtle Race Condition
This is where we learned a painful lesson about async state management.
Our first implementation of Send to Background cancelled the current scan (set SCAN_CANCELLED = true), closed the dialog, and started a new scan in the background. Seemed logical.
But there was a bug. In Rust, every scan starts by resetting the cancellation flag:
fn scan_folder_async(...) {
    SCAN_CANCELLED.store(false, Ordering::SeqCst); // Reset flag
    // ... scan logic that checks SCAN_CANCELLED ...
}

When we started the new background scan, it immediately cleared the cancellation flag before the original scan had checked it. Result: two concurrent scans, both running, both emitting progress events, interleaving their results.
The fix was embarrassingly simple: do not start a new scan. The original scan is already running. Just set up the notification tracking and let it complete.
Results
Before optimisation, 120,000 files caused a 47-second freeze. Users reasonably assumed the app was broken.
After optimisation, 120,000 files scan in 8 seconds with live progress. Users can send to background and keep working. Memory usage dropped by roughly 70MB because the files never hit the JavaScript heap.
Lessons Learned
IPC is not free. Treat the Tauri bridge like a network call. Minimise payload size, batch intelligently, and do not send data the frontend does not need.
Keep heavy data where it belongs. If Rust is going to use the data anyway, let Rust hold it. Send summaries and IDs across the bridge, not raw data.
Events are for progress, not data transfer. Tauri's event system is great for lightweight updates. Do not abuse it as a chunked data transfer mechanism.
Async state is tricky. When you have operations that can be cancelled, restarted, or sent to background, trace through every possible interleaving. Race conditions hide in the gaps.
The user's time is sacred. A frozen UI is not just a performance problem. It is a trust problem. If users cannot tell whether your app is working or crashed, they will assume crashed.
The AssetHoard Team