
Architecture


Technical overview of TensorBloom’s internals for contributors and curious users.

System Overview

┌─────────────────────────────────────────────────────┐
│                   Tauri v2 Shell                     │
│  ┌──────────────────────┐  ┌──────────────────────┐  │
│  │   React Frontend     │  │   Python Sidecar     │  │
│  │   (Vite + TS)        │  │   (PyTorch)          │  │
│  │                      │  │                      │  │
│  │  Graph Editor        │◄─JSON-RPC─►  Compile    │  │
│  │  Property Panel      │  over stdio   Train     │  │
│  │  Code Generator      │              Export     │  │
│  │  Shape Inference     │              Data Scan  │  │
│  └──────────────────────┘  └──────────────────────┘  │
│              Rust (window management, sidecar IPC)    │
└─────────────────────────────────────────────────────┘

Frontend (app/src/)

React + TypeScript + Vite

The frontend handles all UI rendering. Key frameworks:

  • React Flow (xyflow) — Node graph rendering, edge routing, viewport controls
  • Zustand — State management (graph, history, theme, training stores)
  • Tailwind CSS — Styling

Directory Structure

app/src/
├── components/          # React components
│   ├── GraphEditor.tsx  # React Flow canvas, node/edge interaction
│   ├── PropertyPanel.tsx # Right panel — node properties, data config, training
│   ├── TrainingPanel.tsx # Training config form (optimizer, scheduler, etc.)
│   ├── NodePalette.tsx  # Left panel — layer browser
│   ├── Toolbar.tsx      # Menu bar (File/Edit/Insert/View/Tools)
│   ├── CodePanel.tsx    # Bottom panel — generated PyTorch code
│   ├── TrainingCharts.tsx # Training loss/accuracy charts
│   ├── TrainingLogs.tsx # Training log output
│   └── Toast.tsx        # Notification system
├── lib/                 # Core logic
│   ├── registry.ts      # 70+ PyTorch module definitions
│   ├── shape-inference.ts # Client-side dimension propagation
│   ├── decompiler.ts    # Graph → PyTorch code generation
│   ├── training.ts      # Training pipeline (preflight, graph → sidecar)
│   ├── preflight-check.ts # Validation + auto-fix before training
│   ├── templates.ts     # 16 architecture templates
│   ├── project.ts       # Save/load .tbloom files
│   ├── auto-layout.ts   # Dagre-based graph layout
│   └── dataset-metadata.ts # Dataset definitions, shape resolution
├── nodes/
│   └── TensorBloomNode.tsx # Custom React Flow node component
└── store/
    ├── graph-store.ts   # Graph state (nodes, edges, selection)
    ├── history-store.ts # Undo/redo snapshots
    ├── theme-store.ts   # Dark/light theme
    └── training-store.ts # Training state (status, metrics, config)

State Flow

User action → Zustand store update → React re-render → React Flow update

            Shape inference runs → shapeMap updated → Nodes show shapes

            Code decompiler runs → Generated PyTorch code updates
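
Shape inference is one of the passes triggered by each store update. A minimal sketch of the idea, written in Python for brevity (the app's implementation lives in shape-inference.ts; the `conv2d_out_shape` helper below is hypothetical, but the formula is PyTorch's Conv2d output-size rule):

```python
import math

def conv2d_out_shape(in_shape, out_channels, kernel, stride=1, padding=0, dilation=1):
    """Propagate a (C, H, W) shape through a Conv2d node using PyTorch's formula."""
    _, h, w = in_shape
    def out(size):
        return math.floor((size + 2 * padding - dilation * (kernel - 1) - 1) / stride) + 1
    return (out_channels, out(h), out(w))

# A 3x32x32 image through Conv2d(3, 16, kernel_size=3, padding=1) keeps spatial size:
print(conv2d_out_shape((3, 32, 32), out_channels=16, kernel=3, padding=1))  # (16, 32, 32)
```

Running one such rule per node, in topological order, is enough to fill a shapeMap that every node component can read from.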

Rust Backend (app/src-tauri/)

The Rust layer is thin — it manages the window and the Python sidecar process:

  • sidecar_manager.rs — Spawns Python process, pipes stdin/stdout, routes JSON-RPC messages
  • python_env.rs — Auto-provisions a Python venv with PyTorch on first launch
  • lib.rs — Tauri command handlers, GPU detection
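
On the wire, sidecar_manager.rs and the Python process exchange JSON-RPC messages over stdin/stdout. A dispatch sketch from the Python side, assuming newline-delimited framing (a common choice for stdio transports; the actual framing and the `system.gpu_info` result shape here are assumptions):

```python
import json

def handle_line(line, methods):
    """Dispatch one newline-delimited JSON-RPC request to a handler table."""
    req = json.loads(line)
    handler = methods.get(req["method"])
    if handler is None:
        return {"jsonrpc": "2.0", "id": req.get("id"),
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": req.get("id"),
            "result": handler(req.get("params", {}))}

# Hypothetical handler table; the real sidecar registers compile, train.start, etc.
methods = {"system.gpu_info": lambda params: {"cuda": False, "devices": []}}
resp = handle_line('{"jsonrpc": "2.0", "id": 1, "method": "system.gpu_info"}', methods)
print(json.dumps(resp))
```

Unknown methods return the standard JSON-RPC -32601 error, so the Rust side can surface a clean failure instead of a hung request.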

Python Sidecar (python/)

A single-file JSON-RPC server that handles all PyTorch operations:

RPC Methods

  • compile — PyTorch code → graph JSON (via torch.fx.symbolic_trace)
  • decompile — Graph JSON → PyTorch code
  • train.start — Build model from graph, load dataset, train in a background thread
  • train.stop — Signal the training thread to stop
  • data.scan — Read tensor file metadata without loading all data
  • data.probe — Run one batch to discover output shape/dtype
  • data.introspect — Inspect HuggingFace dataset schema
  • export.onnx — Export to ONNX format
  • export.torchscript — Export to TorchScript
  • system.gpu_info — Return CUDA device info
  • validate — Check graph for structural errors
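
For illustration, a train.start request might look like the sketch below. Every field name inside params is hypothetical; the real payload schema is whatever graph JSON the frontend produces:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "train.start",
  "params": {
    "graph": { "nodes": [], "edges": [] },
    "config": { "epochs": 10, "optimizer": "adam", "lr": 0.001 }
  }
}
```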

Model Building

The sidecar builds a runnable nn.Module from graph JSON using torch.fx.Graph:

  1. Topological sort of nodes
  2. Create nn.Module submodules for each computation node
  3. Build torch.fx.Graph with proper input/output routing
  4. Handle special cases: Add, Concatenate, LSTM (output extraction), TransformerDecoderLayer (self-attention mode)
  5. Wrap in torch.fx.GraphModule — ready to train
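
Step 1 can be sketched with Kahn's algorithm over the graph's edge list. The node/edge representation below is an assumption for illustration, not the sidecar's exact schema:

```python
from collections import defaultdict, deque

def topo_sort(nodes, edges):
    """Order node ids so every edge's source precedes its target (Kahn's algorithm)."""
    indegree = {n: 0 for n in nodes}
    successors = defaultdict(list)
    for src, dst in edges:
        successors[src].append(dst)
        indegree[dst] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in successors[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)
    if len(order) != len(nodes):
        raise ValueError("graph contains a cycle")
    return order

print(topo_sort(["input", "conv", "relu", "output"],
                [("input", "conv"), ("conv", "relu"), ("relu", "output")]))
# ['input', 'conv', 'relu', 'output']
```

The cycle check doubles as cheap validation: a graph that cannot be topologically sorted cannot be compiled into a feed-forward torch.fx.Graph.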

Training Loop

  1. Build model from graph
  2. Detect loss function from graph (Loss node)
  3. Load dataset (presets, custom tensors, HuggingFace, CSV, ImageFolder)
  4. Train with gradient clipping, mixed precision, early stopping, checkpointing
  5. Report progress via JSON-RPC notifications (train.progress, train.log)
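
The early stopping in step 4 reduces to a small state machine. A sketch under conventional assumptions (parameter names like `patience` and `min_delta` follow common usage, not necessarily the sidecar's config keys):

```python
class EarlyStopping:
    """Stop when validation loss hasn't improved by min_delta for `patience` epochs."""
    def __init__(self, patience=3, min_delta=0.0):
        self.patience, self.min_delta = patience, min_delta
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        if val_loss < self.best - self.min_delta:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True means: stop training

stopper = EarlyStopping(patience=2)
for epoch, loss in enumerate([1.0, 0.8, 0.81, 0.82, 0.79]):
    if stopper.step(loss):
        print(f"stopping after epoch {epoch}")  # stopping after epoch 3
        break
```

Calling `step` once per epoch keeps the training loop itself free of bookkeeping; the loop only needs a single `if` to break out.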

Module Registry

70+ PyTorch modules defined in registry.ts, organized by category:

  • I/O: Input, Output, Data
  • Linear: Linear, Identity, Group
  • Convolution: Conv1d, Conv2d, Conv3d, ConvTranspose2d
  • Pooling: MaxPool2d, AvgPool2d, AdaptiveAvgPool1d/2d
  • Normalization: BatchNorm1d/2d, LayerNorm, RMSNorm, InstanceNorm
  • Activation: ReLU, GELU, Tanh, Sigmoid, Softmax, LeakyReLU, ELU, etc.
  • Recurrent: LSTM, GRU
  • Transformer: TransformerEncoderLayer, TransformerDecoderLayer, Transformer
  • Dropout: Dropout, Dropout2d
  • Embedding: Embedding
  • Loss: CrossEntropyLoss, MSELoss, BCEWithLogitsLoss, L1Loss, and more
  • Reshape: Flatten, Unflatten, Permute, Squeeze, Unsqueeze, ZeroPad2d
  • Math: Add, Concatenate, MatMul

Each entry defines: type, label, category, description, parameters (with types, defaults, constraints), and input/output handles.
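
In Python form, an entry might look like the sketch below. The real definitions live in registry.ts as TypeScript objects; the field names here follow the description above, and the parameter constraints are illustrative:

```python
# Hypothetical registry entry mirroring the fields described above;
# the actual definitions are TypeScript objects in registry.ts.
linear_entry = {
    "type": "Linear",
    "label": "Linear",
    "category": "Linear",
    "description": "Applies y = xW^T + b.",
    "parameters": [
        {"name": "in_features", "type": "int", "default": 128, "min": 1},
        {"name": "out_features", "type": "int", "default": 64, "min": 1},
        {"name": "bias", "type": "bool", "default": True},
    ],
    "inputs": ["in"],
    "outputs": ["out"],
}

required = {"type", "label", "category", "description", "parameters"}
print(required.issubset(linear_entry))  # True
```

Keeping every module behind one schema is what lets the palette, property panel, shape inference, and code generator all stay data-driven instead of special-casing each layer.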