Building the Rust Backend with Tauri: Domain Structure, Services, and Persistence

Part of the Path of Exile 2 Overlord series

In Part 1 we covered the high-level architecture of Overlord – why I built it, why Tauri over Electron, the monorepo layout, and the domains that make up the backend. This post is where we actually get into the Rust code. I want to walk through how the backend is structured, how the domains are wired together, how data gets persisted, and how Tauri commands bridge everything to the frontend.

Fair warning: I came into this project with zero Rust experience. I'd read half the Rust book, skimmed a handful of blog posts, and figured "how hard can it be?" Turns out the borrow checker has opinions. But after enough fighting with lifetimes and trait objects, the patterns that emerged are honestly pretty clean.

Tauri 2 project setup

The tauri.conf.json file lives in the backend package and defines your app window, build pipeline, and security settings. The build section is where the monorepo magic happens – beforeDevCommand spins up the Vite dev server, devUrl tells Tauri where to find it, and frontendDist points to the production build output. When you run yarn tauri dev, both halves start together.
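As a sketch, the build section can look something like this (the commands and paths here are illustrative for a typical yarn + Vite setup, not copied from Overlord's actual config):

```json
{
  "build": {
    "beforeDevCommand": "yarn dev",
    "devUrl": "http://localhost:1420",
    "beforeBuildCommand": "yarn build",
    "frontendDist": "../dist"
  }
}
```

Port 1420 is just Vite's conventional port in Tauri templates; the point is that `devUrl` and `frontendDist` must agree with whatever your frontend tooling actually produces.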

The actual entry point is dead simple. main.rs just calls run(), and lib.rs is where the Tauri builder lives. It registers plugins, hooks up all the command handlers, and delegates app initialization to a setup_app function:

pub fn run() {
    tauri::Builder::default()
        .plugin(tauri_plugin_shell::init())
        .plugin(tauri_plugin_process::init())
        .invoke_handler(tauri::generate_handler![
            get_config, update_config,
            create_character, get_character, get_all_characters,
            get_currency_exchange_data, search_currencies,
            // ... and more
        ])
        .setup(|app| setup_app(app))
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}

That invoke_handler macro is the bridge between Rust and the frontend. Every function listed there becomes callable from TypeScript via invoke("get_all_characters").

The domain file structure

Every domain follows the same layout. Here's what that looks like using the character domain as the example:

domain/character/
  models.rs       # Data structures
  traits.rs       # Async trait interfaces
  service.rs      # Business logic
  repository.rs   # File-based JSON persistence
  commands.rs     # Tauri IPC command handlers

When you have a bunch of domains and you're trying to figure out where economy caching logic lives, or how walkthrough progress gets saved, you just know. It's always service.rs for logic, repository.rs for persistence, commands.rs for IPC.

Models and serialization

Every domain starts with models.rs. These are your data structures – what gets serialized to JSON, sent over IPC, and stored on disk. Rust's derive macros make this straightforward:

#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct CharacterData {
    pub id: String,
    pub name: String,
    pub class: CharacterClass,
    pub current_location: Option<LocationState>,
    pub summary: TrackingSummary,
    // ...
}

Serde's #[serde(flatten)] attribute flattens nested structs into the parent JSON object, keeping the on-disk format clean while keeping the Rust code logically grouped. And #[serde(default)] handles backwards compatibility – if you add a new field and load an old JSON file, serde fills in the default instead of crashing.

Async traits and service interfaces

This is where things get more Rust-specific. Traits serve as interfaces for dependency inversion, but async methods in traits need a little help. The async-trait crate handles the ergonomics:

#[async_trait]
pub trait CharacterService: Send + Sync {
    async fn create_character(&self, name: String, class: CharacterClass)
        -> Result<CharacterDataResponse, AppError>;
    async fn get_all_characters(&self)
        -> Result<Vec<CharacterDataResponse>, AppError>;
    async fn enter_zone(&self, character_id: &str, zone_name: &str)
        -> Result<(), AppError>;
}

The : Send + Sync bound tells the compiler that any type implementing this trait can be safely shared across threads. Since Tauri's command handlers run on different threads and our services get shared via Arc<T>, everything needs those bounds.

The repository trait follows the same pattern. Why bother with traits when there's only one implementation? Testing. With mockall, you can auto-generate mock implementations of any trait, which means service tests don't need actual files on disk.
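Overlord uses mockall for this, but the underlying idea works even hand-rolled. Here's a std-only sketch of a fake repository behind a trait (simplified to synchronous methods for brevity; the real traits are async):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Simplified, synchronous stand-in for an async repository trait
trait CharacterRepository: Send + Sync {
    fn save(&self, id: &str, json: String);
    fn load(&self, id: &str) -> Option<String>;
}

// In-memory fake: no files on disk, which is exactly what service tests want
struct InMemoryRepository {
    store: Mutex<HashMap<String, String>>,
}

impl CharacterRepository for InMemoryRepository {
    fn save(&self, id: &str, json: String) {
        self.store.lock().unwrap().insert(id.to_string(), json);
    }
    fn load(&self, id: &str) -> Option<String> {
        self.store.lock().unwrap().get(id).cloned()
    }
}

fn main() {
    // The service only sees `dyn CharacterRepository`, so swapping in the
    // fake requires no changes to the service code
    let repo: Box<dyn CharacterRepository> = Box::new(InMemoryRepository {
        store: Mutex::new(HashMap::new()),
    });
    repo.save("abc", r#"{"name":"MyChar"}"#.into());
    assert_eq!(repo.load("abc").as_deref(), Some(r#"{"name":"MyChar"}"#));
    assert!(repo.load("missing").is_none());
}
```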

Dependency injection with Arc

Rust doesn't have a DI container like you'd find in C# or Java. Instead, you do it manually with constructor injection, and the magic ingredient is Arc<T> – an atomically reference-counted smart pointer that lets multiple owners share the same instance across threads.

pub struct CharacterServiceImpl {
    repository: Arc<dyn CharacterRepository + Send + Sync>,
    event_bus: Arc<EventBus>,
    zone_config: Arc<dyn ZoneConfigurationService>,
}
 
impl CharacterServiceImpl {
    pub fn new(
        repository: Arc<dyn CharacterRepository + Send + Sync>,
        event_bus: Arc<EventBus>,
        zone_config: Arc<dyn ZoneConfigurationService>,
    ) -> Self {
        Self { repository, event_bus, zone_config }
    }
}

Every dependency is an Arc<dyn SomeTrait> – a reference-counted pointer to a trait object. The concrete type is erased behind the trait, so the service doesn't know or care whether it's talking to the real repository or a mock.
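A compact, runnable illustration of the sharing (trait and zone names here are made up for the example):

```rust
use std::sync::Arc;

trait ZoneConfigurationService: Send + Sync {
    fn act_for_zone(&self, zone: &str) -> u8;
}

struct StaticZoneConfig;
impl ZoneConfigurationService for StaticZoneConfig {
    fn act_for_zone(&self, zone: &str) -> u8 {
        if zone == "Clearfell" { 1 } else { 0 }
    }
}

// Two services holding the same instance; neither owns it exclusively
struct CharacterService { zone_config: Arc<dyn ZoneConfigurationService> }
struct WalkthroughService { zone_config: Arc<dyn ZoneConfigurationService> }

fn main() {
    let zone_config: Arc<dyn ZoneConfigurationService> = Arc::new(StaticZoneConfig);
    let characters = CharacterService { zone_config: zone_config.clone() };
    let walkthrough = WalkthroughService { zone_config: zone_config.clone() };
    assert_eq!(characters.zone_config.act_for_zone("Clearfell"), 1);
    assert_eq!(walkthrough.zone_config.act_for_zone("Unknown"), 0);
    // Three Arc handles exist: the original plus the two clones
    assert_eq!(Arc::strong_count(&zone_config), 3);
}
```

`Arc::clone` only bumps a reference count; it never copies the service, which is why passing `service.clone()` everywhere during wiring is cheap.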

The service registry

With domains that depend on each other, you can't just create everything in any order. The service registry handles the dependency chain during startup:

pub fn initialize_services(app: &mut tauri::App) -> Result<ServiceInstances, Box<dyn std::error::Error>> {
    // 1. Foundation: no dependencies
    let config_service = Arc::new(ConfigurationServiceImpl::new()?);
    let event_bus = Arc::new(EventBus::new());
 
    // 2. Zone configuration: only needs its repository
    let zone_config_service = Arc::new(ZoneConfigurationServiceImpl::new(zone_config_repo));
 
    // 3. Character: depends on event_bus + zone_config
    let character_service = Arc::new(CharacterServiceImpl::with_default_repository(
        event_bus.clone(), zone_config_service.clone(),
    )?);
 
    // 4. Walkthrough: depends on character + event_bus
    let walkthrough_service = Arc::new(WalkthroughServiceImpl::new(
        walkthrough_repo, character_service.clone(), event_bus.clone(),
    ));
 
    // 5. Log analysis: depends on almost everything
    let log_analysis_service = Arc::new(LogAnalysisServiceImpl::new(
        character_service.clone(), walkthrough_service.clone(),
        zone_config_service.clone(), event_bus.clone(),
    )?);
 
    // Register with Tauri's state management
    app.manage(config_service.clone());
    app.manage(event_bus.clone());
    // ...
}

Each service gets wrapped in an Arc and registered with app.manage(). Tauri's state management then makes these available to command handlers via dependency injection.

Error handling

Early in the project I had different error types scattered across domains. That got old fast. Now there's a single AppError enum that covers everything:

#[derive(Error, Debug)]
pub enum AppError {
    #[error("File system error: {message}")]
    FileSystem { message: String },
    #[error("Validation error: {message}")]
    Validation { message: String },
    #[error("Internal error: {message}")]
    Internal { message: String },
    #[error("Network error: {message}")]
    Network { message: String },
}

The thiserror crate generates the trait implementations from those annotations. There are also From implementations that convert standard library errors automatically, so you can use the ? operator freely – any io or serde error converts into an AppError and flows upward naturally.
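To make the conversion concrete, here's a hand-written equivalent of what those From implementations do (thiserror can derive much of this via `#[from]`; the enum here is trimmed to two variants for the example):

```rust
use std::fmt;

#[derive(Debug)]
enum AppError {
    FileSystem { message: String },
    Internal { message: String },
}

impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AppError::FileSystem { message } => write!(f, "File system error: {message}"),
            AppError::Internal { message } => write!(f, "Internal error: {message}"),
        }
    }
}

// Any std::io::Error converts into AppError, so `?` works transparently
impl From<std::io::Error> for AppError {
    fn from(e: std::io::Error) -> Self {
        AppError::FileSystem { message: e.to_string() }
    }
}

fn read_config(path: &str) -> Result<String, AppError> {
    // The io::Error from read_to_string is converted by the From impl above
    let content = std::fs::read_to_string(path)?;
    Ok(content)
}

fn main() {
    let err = read_config("/definitely/not/a/real/path").unwrap_err();
    assert!(matches!(err, AppError::FileSystem { .. }));
}
```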

Tauri commands

Commands are how the frontend talks to the backend. They're just regular Rust functions with a #[tauri::command] attribute:

#[tauri::command]
pub async fn create_character(
    name: String,
    class: CharacterClass,
    character_service: State<'_, Arc<dyn CharacterService + Send + Sync>>,
) -> CommandResult<CharacterDataResponse> {
    let result = character_service.create_character(name, class).await;
    to_command_result(result)
}

The function parameters before the service are arguments from the frontend – when TypeScript calls invoke("create_character", { name: "MyChar", class: "Warrior" }), Tauri deserializes those JSON values into the corresponding Rust types automatically.

The State<'_> parameter is Tauri's dependency injection. Remember how we called app.manage() during initialization? Tauri looks at the type signature and injects the matching managed state.

Most commands follow this exact pattern: accept arguments, grab the service from state, call the service method, convert the result. They're intentionally thin – no business logic lives in the command layer.
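The article doesn't show `CommandResult` or `to_command_result`, so here's a hypothetical minimal shape, assuming errors are flattened to strings so they serialize cleanly over IPC:

```rust
// Hypothetical sketch: the real CommandResult/to_command_result aren't shown
// above. Assumption: errors become strings for IPC serialization.
type CommandResult<T> = Result<T, String>;

#[derive(Debug)]
enum AppError {
    Validation { message: String },
}

fn to_command_result<T>(result: Result<T, AppError>) -> CommandResult<T> {
    result.map_err(|e| match e {
        AppError::Validation { message } => format!("Validation error: {message}"),
    })
}

fn main() {
    assert_eq!(to_command_result::<u32>(Ok(42)), Ok(42));
    let err = to_command_result::<u32>(Err(AppError::Validation {
        message: "name is empty".into(),
    }));
    assert_eq!(err, Err("Validation error: name is empty".to_string()));
}
```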

File-based persistence

The repository layer handles all file I/O. Each character gets its own JSON file and there's an index file that tracks all character IDs and which one is active.

Writes use a temp-file-then-rename pattern to prevent corruption:

async fn write(path: &Path, content: &str) -> AppResult<()> {
    let temp_path = Self::get_temp_path(path);
    fs::write(&temp_path, content).await?;
    fs::rename(&temp_path, path).await?;
    Ok(())
}

If the app crashes mid-write, you either have the old file or the new file – you never get a half-written corrupt JSON file. This was one of those things I added after losing character data to a crash during development. Lesson learned.
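A synchronous, std-only version of the same pattern is easy to try out (the temp-file naming here is illustrative; rename is only atomic when both paths are on the same filesystem, which is why the temp file lives next to the target):

```rust
use std::fs;
use std::path::Path;

// Temp-file-then-rename: readers see either the old contents or the new
// contents, never a partial write
fn write_atomic(path: &Path, content: &str) -> std::io::Result<()> {
    let temp_path = path.with_extension("tmp");
    fs::write(&temp_path, content)?;
    fs::rename(&temp_path, path)?;
    Ok(())
}

fn main() -> std::io::Result<()> {
    let target = std::env::temp_dir().join("overlord_demo.json");
    write_atomic(&target, r#"{"id":"abc","name":"MyChar"}"#)?;
    assert_eq!(fs::read_to_string(&target)?, r#"{"id":"abc","name":"MyChar"}"#);
    fs::remove_file(&target)?;
    Ok(())
}
```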

The dirs crate handles resolving OS-specific paths. On Linux, your app data goes in ~/.local/share/poe2-overlord/. On macOS it's ~/Library/Application Support/. If you ever need to move your Overlord setup to a new machine, you just copy the data directory. No database exports, no migration scripts.

App setup and lifecycle

The setup_app function ties everything together. It's called once during Tauri's initialization:

pub fn setup_app(app: &mut tauri::App) -> Result<(), Box<dyn std::error::Error>> {
    let services = ServiceInitializer::initialize_services(app)?;
    let main_window = app
        .get_webview_window("main")
        .ok_or("main window not found")?;
 
    // Start the Tauri event bridge
    let event_bridge = TauriEventBridge::new(services.event_bus.clone(), main_window.clone());
    event_bridge.start_forwarding()?;
 
    // Kick off background tasks
    start_log_monitoring(services.log_analysis_service.clone());
    start_game_process_monitoring(services.game_monitoring_service.clone());
 
    // Register shutdown handler. The closure is synchronous, so block on the
    // async shutdown; ServiceInstances holds Arcs, making the clone cheap.
    let services_clone = services.clone();
    main_window.on_window_event(move |event| {
        if let tauri::WindowEvent::CloseRequested { .. } = event {
            tauri::async_runtime::block_on(services_clone.shutdown_services());
        }
    });
 
    Ok(())
}

There's a lifecycle here: services initialize, the event bridge connects domain events to the frontend, background monitoring tasks start, and a shutdown handler ensures tracking data gets finalized when the app closes.

Background tasks are spawned with tauri::async_runtime::spawn, which drops them onto the tokio runtime. They run independently of the main thread and communicate back through the event bus. Part 3 will go deep on the log monitoring task specifically.

What's next

That covers the structural foundation of the Rust backend – the domain pattern, dependency injection, error handling, commands, and persistence. In Part 3, we'll get into the log analysis pipeline: how the app watches Client.txt in real time, parses log lines with regex, detects zone changes and deaths and level-ups, and triggers cascading updates across multiple domains from a single log entry.

If you have questions or want to see the source code, the project page is at Overlord. Thanks for reading and happy coding!

© 2025 David Meents. All Rights Reserved.