Testing a Tauri App: Testing Across Rust and TypeScript

Part of the Path of Exile 2 Overlord series

This is the last one. Over the course of this series we've walked through the architecture, the Rust backend, the log parsing pipeline, the event-driven communication layer, the React frontend, and the economy and walkthrough features end to end. But I haven't talked about how any of it gets tested.

For an app with this many moving parts – two runtimes, an IPC bridge, file-based persistence, real-time event streams, external API integration – testing is the thing that lets me refactor without holding my breath. Let me walk through the testing strategy on both sides.

The frontend testing stack

The frontend uses Vitest as the test runner, React Testing Library for rendering components, and jsdom as the browser environment. Every component has a co-located .spec.tsx file right next to the component source.
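
The wiring behind that stack is ordinary Vitest configuration. Here's a minimal sketch rather than the project's actual config file – the React plugin and the include glob are assumptions, but the jsdom environment and the setup file path match what's described in this post:

// vitest.config.ts – minimal sketch (plugin choice and glob are assumptions)
import { defineConfig } from 'vitest/config';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()],
  test: {
    environment: 'jsdom',                 // browser-like DOM for component tests
    setupFiles: ['./src/test/setup.ts'],  // global Tauri mocks (next section)
    include: ['src/**/*.spec.tsx'],       // co-located spec files
  },
});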

The setup file: mocking Tauri

This is the most important file in the entire test setup. Every frontend component that calls invoke(), listens for events, or interacts with the window API would crash in jsdom without these mocks:

// src/test/setup.ts
import '@testing-library/jest-dom/vitest';
import { cleanup } from '@testing-library/react';
import { afterEach, vi } from 'vitest';
 
afterEach(() => cleanup());
 
vi.mock('@tauri-apps/api/core', () => ({
  invoke: vi.fn(),
}));
 
vi.mock('@tauri-apps/api/event', () => ({
  listen: vi.fn(() => Promise.resolve(() => {})),
  emit: vi.fn(),
}));
 
vi.mock('@tauri-apps/plugin-window', () => ({
  getCurrentWindow: vi.fn(() => ({
    close: vi.fn(),
    minimize: vi.fn(),
    toggleMaximize: vi.fn(),
  })),
}));

Three mocks and they cover everything. The invoke mock handles all Tauri command calls. The listen mock returns a no-op unlisten function. And getCurrentWindow handles the custom window title bar buttons.

I didn't set this up on day one. My first attempt at testing hit a wall of "Cannot find module @tauri-apps/api/core" errors, and I went down a rabbit hole trying to mock individual imports per test file. Eventually I realized a global setup file was the only sane approach. The lesson: mock Tauri early, do it globally.

What a component test looks like

Here's a condensed version of the CharacterCard spec:

const mockCharacter: CharacterData = {
  id: 'test-id',
  name: 'TestCharacter',
  class: 'Warrior',
  level: 50,
  league: 'Standard',
  // ...
};
 
describe('CharacterCard', () => {
  const defaultProps = {
    character: mockCharacter,
    isActive: false,
    onSelect: vi.fn(),
    onEdit: vi.fn(),
  };
 
  beforeEach(() => vi.clearAllMocks());
 
  it('renders character name', () => {
    render(<CharacterCard {...defaultProps} />);
    expect(screen.getByText('TestCharacter')).toBeInTheDocument();
  });
 
  it('calls onSelect when card is clicked', async () => {
    const user = userEvent.setup();
    render(<CharacterCard {...defaultProps} />);
    await user.click(screen.getByText('TestCharacter'));
    expect(defaultProps.onSelect).toHaveBeenCalledTimes(1);
  });
});

A few patterns to notice. Mock data is defined at the top as a complete, valid object. defaultProps gives every test a baseline with vi.fn() callbacks. And the queries are mostly screen.getByText and screen.getByRole – we rarely use getByTestId. Testing Library's philosophy is to query the DOM the way a user would.

Form testing with userEvent

Form components need a different approach. Testing Library's companion user-event package handles simulating typing and clicking:

it('calls onSubmit with form data when submitted', async () => {
  const user = userEvent.setup();
  const handleSubmit = vi.fn();
 
  render(<CharacterFormModal {...defaultProps} onSubmit={handleSubmit} />);
 
  await user.type(screen.getByPlaceholderText('Enter character name'), 'NewCharacter');
  await user.click(screen.getByRole('button', { name: 'Create Character' }));
 
  expect(handleSubmit).toHaveBeenCalledWith(
    expect.objectContaining({ name: 'NewCharacter' })
  );
});

Every interaction with userEvent is async – you have to await the calls. I forgot the await on a few tests early on and spent a while confused about why assertions were failing. The events hadn't finished propagating yet.

Mocking context dependencies

Components that use context hooks need those contexts mocked:

vi.mock('@/contexts/CharacterContext', () => ({
  useCharacter: () => ({ activeCharacter: null }),
}));

The vi.mock calls go at the top of the file. For mocks that need to return different values in different tests, use vi.hoisted() to create a mock function that's accessible both inside the vi.mock factory and in the test bodies.
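
Here's a minimal sketch of that pattern – the component under test and the assertion are placeholders, but the vi.hoisted() / vi.mock() wiring is the part that matters:

// vi.hoisted() runs before the hoisted vi.mock() call, so the handle it
// returns is usable both inside the mock factory and in test bodies.
const mocks = vi.hoisted(() => ({
  useCharacter: vi.fn(() => ({ activeCharacter: null as CharacterData | null })),
}));

vi.mock('@/contexts/CharacterContext', () => ({
  useCharacter: mocks.useCharacter,
}));

it('renders differently when a character is active', () => {
  // Override the return value for this test (reset it between tests if needed).
  mocks.useCharacter.mockReturnValue({ activeCharacter: mockCharacter });
  render(<SomeCharacterAwareComponent />);  // placeholder component name
  // ...assert against the active-character state
});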

The Rust side: service tests with mocks

On the backend, test support is built into the language and toolchain. You write a #[cfg(test)] module and mark functions with the #[test] attribute. For async tests, we use #[tokio::test].
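
The shape of a test module looks like this – a minimal sketch, not code from Overlord:

// Minimal sketch of the test-module layout (the function is a stand-in).
pub fn double(n: i32) -> i32 {
    n * 2
}

#[cfg(test)]
mod tests {
    use super::*;

    // Plain synchronous unit test.
    #[test]
    fn doubles_a_number() {
        assert_eq!(double(2), 4);
    }

    // Async test running on a Tokio runtime.
    #[tokio::test]
    async fn doubles_on_a_spawned_task() {
        let result = tokio::spawn(async { double(21) }).await.unwrap();
        assert_eq!(result, 42);
    }
}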

Parser tests are the simplest – feed the parser a line, check what comes back:

#[test]
fn test_parse_line_extracts_zone_name() {
    let parser = SceneChangeParser::new();
    let line = "2026/01/11 10:30:45 12345 abc [INFO Client 1234] [SCENE] Set Source [The Coast]";
 
    let result = parser.parse_line(line);
    assert!(result.is_ok());
 
    match result.unwrap() {
        ParserResult::SceneChange(zone_name) => assert_eq!(zone_name, "The Coast"),
        _ => panic!("Expected SceneChange result"),
    }
}

These are valuable because the parsers process every single line from Client.txt. If a regex is wrong, it affects everything downstream.

Service tests use mock repositories. A service like WalkthroughServiceImpl depends on a repository and a character service. In tests, we replace those with in-memory mocks:

#[tokio::test]
async fn test_handle_scene_change_advances_step() {
    let guide = create_test_guide();
    let character = create_character_at_step("char-1", "act_1_step_1");
    let character_service = Arc::new(MockCharacterService::with_character(character));
    // In-memory mock repository (the constructor shown here is illustrative; see the sketch below)
    let repository = Arc::new(MockWalkthroughRepository::default());

    let service = create_test_service(repository, character_service.clone());
 
    service.handle_scene_change("char-1", "The Coast").await.unwrap();
 
    let characters = character_service.characters.read().await;
    let updated = characters.get("char-1").unwrap();
    assert_eq!(updated.walkthrough_progress.current_step_id, Some("act_1_step_2".to_string()));
}

The trait-based DI pattern from Part 2 makes this possible. Because the service takes Arc<dyn WalkthroughRepository> instead of a concrete type, swapping in a mock is trivial.
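
To make that concrete, here's a minimal sketch of the shape, assuming the async_trait crate for the trait object – the trait method, the Guide stand-in, and the mock's internals are illustrative, not Overlord's real interfaces:

// Illustrative stand-ins; the real trait and domain types live in the walkthrough domain.
use std::collections::HashMap;

use async_trait::async_trait;
use tokio::sync::RwLock;

#[derive(Clone)]
pub struct Guide {
    pub id: String,
}

#[async_trait]
pub trait WalkthroughRepository: Send + Sync {
    async fn load_guide(&self, guide_id: &str) -> Result<Guide, String>;
}

// In-memory mock: no filesystem, just a map behind a tokio RwLock.
#[derive(Default)]
pub struct MockWalkthroughRepository {
    guides: RwLock<HashMap<String, Guide>>,
}

#[async_trait]
impl WalkthroughRepository for MockWalkthroughRepository {
    async fn load_guide(&self, guide_id: &str) -> Result<Guide, String> {
        self.guides
            .read()
            .await
            .get(guide_id)
            .cloned()
            .ok_or_else(|| format!("guide '{guide_id}' not found"))
    }
}

// Because the service only sees the trait object, the mock slots in anywhere a real repository would:
// let repository: Arc<dyn WalkthroughRepository> = Arc::new(MockWalkthroughRepository::default());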

Concurrency tests

The economy service tests include concurrency tests for the semaphore deduplication:

#[tokio::test]
async fn test_concurrent_requests_no_deadlock() {
    let service = Arc::new(EconomyService::new());
 
    let handle1 = tokio::spawn({
        let service = service.clone();
        async move { service.fetch_currency_exchange_data("TestLeague", false, EconomyType::Currency).await }
    });
 
    let handle2 = tokio::spawn({
        let service = service.clone();
        async move { service.fetch_currency_exchange_data("TestLeague", false, EconomyType::Currency).await }
    });
 
    let timeout_result = tokio::time::timeout(
        Duration::from_secs(15),
        async { tokio::join!(handle1, handle2) }
    ).await;
 
    assert!(timeout_result.is_ok(), "Concurrent requests should complete without deadlock");
}

Two concurrent requests for the same cache key, wrapped in a timeout. If the semaphore logic deadlocks, the join never completes, the 15-second timeout fires, and the assertion fails.

What coverage actually means

Let me be straight about what having tests means and doesn't mean.

What it means: every component has a spec file, and every spec renders the component and exercises its primary functionality. If someone breaks a component's rendering, a test fails. That safety net is real.

What it doesn't mean: we've tested every possible state and edge case. Component tests are great at verifying "does this render the right stuff given this input?" They're less great at verifying "does the whole app work correctly when data flows through nine steps from a log line to a UI update?"

Those event-driven flows span the boundary between Rust and TypeScript. You can test each side in isolation, but testing the round trip requires the full Tauri runtime.

There are also bugs that component tests don't catch. Race conditions in event handlers, stale closures that only manifest after multiple rapid state updates, subtle differences between jsdom and a real WebKit webview. The stale closure bug I talked about in Part 4 was found by using the app, not by running tests.

Lessons I'd pass along

Mock Tauri globally from day one. Don't wait until you have a bunch of components and then try to retrofit mocks.

Always await userEvent calls. They're async. If you forget the await, assertions run before events have propagated.

Prefer getByRole and getByLabelText over getByTestId. If you can find an element the way a user would, your tests are more meaningful and resilient.

Use factories for mock data. The first time you change a type and have to update mock objects everywhere, you'll wish you had createMockZone(overrides).
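
A sketch of what I mean – the baseline mirrors the CharacterData mock from earlier, with the field list abbreviated:

// Factory sketch: one place owns the complete baseline, tests override only what they care about.
export function createMockCharacter(overrides: Partial<CharacterData> = {}): CharacterData {
  const base = {
    id: 'test-id',
    name: 'TestCharacter',
    class: 'Warrior',
    level: 50,
    league: 'Standard',
    // ...remaining required fields
  } as CharacterData;  // cast only because the field list is abbreviated in this sketch
  return { ...base, ...overrides };
}

// Usage: only the fields under test differ from the baseline.
const lowLevelCharacter = createMockCharacter({ level: 3 });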

Trait-based DI on the backend isn't just architecture – it's testability. The whole reason service tests can run without a real filesystem is that the repository is behind a trait.

Wrapping up the series

This has been seven posts covering the full stack of a Tauri desktop application – from architecture decisions and Rust domain structure, through log parsing and event-driven communication, to the React frontend and full-stack feature walkthroughs.

If there's one theme I'd pull from all seven parts, it's that consistency pays compound interest. Consistent domain structure means you know where to find things. Consistent trait interfaces mean you can swap in mocks. Consistent test patterns mean you can write new tests quickly. None of these things are exciting on their own, but together they're what let a solo developer maintain a project across two runtimes without losing track of what's where.

Overlord is far from finished – there's more to build, more to test, and more to learn. But I hope this series gave you a useful look at what building a real Tauri app looks like in practice, with the tradeoffs and mistakes included.

Thanks for sticking with me through all seven parts, and happy coding!
