# Architecture
Technical overview of TensorBloom’s internals for contributors and curious users.
## System Overview

```
┌──────────────────────────────────────────────────────────┐
│                      Tauri v2 Shell                      │
│  ┌──────────────────┐              ┌──────────────────┐  │
│  │ React Frontend   │              │  Python Sidecar  │  │
│  │   (Vite + TS)    │              │    (PyTorch)     │  │
│  │                  │              │                  │  │
│  │  Graph Editor    │◄─JSON-RPC──► │  Compile         │  │
│  │  Property Panel  │  over stdio  │  Train           │  │
│  │  Code Generator  │              │  Export          │  │
│  │  Shape Inference │              │  Data Scan       │  │
│  └──────────────────┘              └──────────────────┘  │
│          Rust (window management, sidecar IPC)           │
└──────────────────────────────────────────────────────────┘
```
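The two processes exchange JSON-RPC messages over the sidecar's stdio. A rough sketch of the two message kinds, in Python for illustration — the one-message-per-line framing and all payload fields beyond the JSON-RPC envelope are assumptions here, not TensorBloom's actual wire format:

```python
# Illustrative JSON-RPC message shapes for the frontend <-> sidecar link.
# Payload fields are assumptions, not TensorBloom's actual schema.
import json

# A request carries an id and expects a matching response:
request = {"jsonrpc": "2.0", "id": 1, "method": "system.gpu_info", "params": {}}
response = {"jsonrpc": "2.0", "id": 1, "result": {"cuda": True}}

# A notification carries no id and gets no reply (used for streaming
# training progress back to the UI):
notification = {"jsonrpc": "2.0", "method": "train.progress",
                "params": {"epoch": 1, "loss": 0.42}}

# Assumed framing: one JSON message per line over stdio.
wire = json.dumps(request) + "\n"
print(wire.strip())
```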
## Frontend (app/src/)

Built with React, TypeScript, and Vite, the frontend handles all UI rendering. Key libraries:
- React Flow (xyflow) — Node graph rendering, edge routing, viewport controls
- Zustand — State management (graph, history, theme, training stores)
- Tailwind CSS — Styling
### Directory Structure

```
app/src/
├── components/              # React components
│   ├── GraphEditor.tsx      # React Flow canvas, node/edge interaction
│   ├── PropertyPanel.tsx    # Right panel — node properties, data config, training
│   ├── TrainingPanel.tsx    # Training config form (optimizer, scheduler, etc.)
│   ├── NodePalette.tsx      # Left panel — layer browser
│   ├── Toolbar.tsx          # Menu bar (File/Edit/Insert/View/Tools)
│   ├── CodePanel.tsx        # Bottom panel — generated PyTorch code
│   ├── TrainingCharts.tsx   # Training loss/accuracy charts
│   ├── TrainingLogs.tsx     # Training log output
│   └── Toast.tsx            # Notification system
├── lib/                     # Core logic
│   ├── registry.ts          # 70+ PyTorch module definitions
│   ├── shape-inference.ts   # Client-side dimension propagation
│   ├── decompiler.ts        # Graph → PyTorch code generation
│   ├── training.ts          # Training pipeline (preflight, graph → sidecar)
│   ├── preflight-check.ts   # Validation + auto-fix before training
│   ├── templates.ts         # 16 architecture templates
│   ├── project.ts           # Save/load .tbloom files
│   ├── auto-layout.ts       # Dagre-based graph layout
│   └── dataset-metadata.ts  # Dataset definitions, shape resolution
├── nodes/
│   └── TensorBloomNode.tsx  # Custom React Flow node component
└── store/
    ├── graph-store.ts       # Graph state (nodes, edges, selection)
    ├── history-store.ts     # Undo/redo snapshots
    ├── theme-store.ts       # Dark/light theme
    └── training-store.ts    # Training state (status, metrics, config)
```
### State Flow

```
User action → Zustand store update → React re-render → React Flow update
      ↓
Shape inference runs → shapeMap updated → nodes show shapes
      ↓
Code decompiler runs → generated PyTorch code updates
```
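The shape-inference pass in `shape-inference.ts` propagates tensor dimensions node by node. The core arithmetic for a convolution node can be sketched as follows — in Python for brevity, though the real implementation is TypeScript, and the function name here is hypothetical:

```python
# Sketch of the dimension propagation shape-inference.ts performs for a
# Conv2d node, using PyTorch's standard output-size formula.
def conv2d_out_shape(h, w, kernel, stride=1, padding=0, dilation=1):
    """Propagate a (H, W) spatial shape through a square Conv2d."""
    def out(size):
        return (size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1
    return out(h), out(w)

# A 28x28 input through a 3x3 conv with padding 1 keeps its size:
print(conv2d_out_shape(28, 28, kernel=3, padding=1))             # (28, 28)
# With stride 2 it halves:
print(conv2d_out_shape(28, 28, kernel=3, stride=2, padding=1))   # (14, 14)
```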
## Rust Backend (app/src-tauri/)

The Rust layer is thin — it manages the window and the Python sidecar process:

- `sidecar_manager.rs` — Spawns the Python process, pipes stdin/stdout, routes JSON-RPC messages
- `python_env.rs` — Auto-provisions a Python venv with PyTorch on first launch
- `lib.rs` — Tauri command handlers, GPU detection
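The spawn-and-pipe pattern `sidecar_manager.rs` implements can be sketched in a few lines of Python (the real code is Rust; the line-delimited framing is an assumption of this sketch, and the child process here is a trivial stand-in that echoes a result for any request):

```python
# Illustrative sketch of the sidecar-manager pattern: spawn a child,
# write one JSON-RPC request per line to its stdin, read newline-
# delimited responses from its stdout.
import json
import subprocess
import sys

# Stand-in "sidecar" that echoes each request's method as its result.
ECHO_SIDECAR = (
    "import json,sys\n"
    "for line in sys.stdin:\n"
    "    req = json.loads(line)\n"
    "    resp = {'jsonrpc': '2.0', 'id': req['id'], 'result': req['method']}\n"
    "    print(json.dumps(resp), flush=True)\n"
)

def call(proc, req_id, method, params=None):
    """Send one request and block until its response line arrives."""
    request = {"jsonrpc": "2.0", "id": req_id, "method": method,
               "params": params or {}}
    proc.stdin.write(json.dumps(request) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

proc = subprocess.Popen([sys.executable, "-c", ECHO_SIDECAR],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        text=True)
print(call(proc, 1, "system.gpu_info"))
```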
## Python Sidecar (python/)
A single-file JSON-RPC server that handles all PyTorch operations:
### RPC Methods

| Method | Description |
|---|---|
| `compile` | PyTorch code → graph JSON (via `torch.fx.symbolic_trace`) |
| `decompile` | Graph JSON → PyTorch code |
| `train.start` | Build model from graph, load dataset, train in a background thread |
| `train.stop` | Signal the training thread to stop |
| `data.scan` | Read tensor file metadata without loading all data |
| `data.probe` | Run one batch to discover output shape/dtype |
| `data.introspect` | Inspect a HuggingFace dataset schema |
| `export.onnx` | Export to ONNX format |
| `export.torchscript` | Export to TorchScript |
| `system.gpu_info` | Return CUDA device info |
| `validate` | Check graph for structural errors |
### Model Building

The sidecar builds a runnable `nn.Module` from graph JSON using `torch.fx.Graph`:

- Topologically sort the nodes
- Create `nn.Module` submodules for each computation node
- Build a `torch.fx.Graph` with proper input/output routing
- Handle special cases: Add, Concatenate, LSTM (output extraction), TransformerDecoderLayer (self-attention mode)
- Wrap in a `torch.fx.GraphModule` — ready to train
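The first step above — topological sorting — can be sketched in plain Python with Kahn's algorithm. The node names and `(src, dst)` edge tuples below are illustrative assumptions, not TensorBloom's exact graph-JSON schema:

```python
# Kahn's algorithm: repeatedly emit nodes whose inputs are all resolved.
from collections import deque

def topo_sort(nodes, edges):
    """Return nodes in dependency order; raise on cycles."""
    indegree = {n: 0 for n in nodes}
    successors = {n: [] for n in nodes}
    for src, dst in edges:
        indegree[dst] += 1
        successors[src].append(dst)
    ready = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in successors[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    if len(order) != len(nodes):
        raise ValueError("graph contains a cycle")
    return order

# input → linear → relu → output, plus a skip edge input → relu:
print(topo_sort(["input", "linear", "relu", "output"],
                [("input", "linear"), ("linear", "relu"),
                 ("input", "relu"), ("relu", "output")]))
# ['input', 'linear', 'relu', 'output']
```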
### Training Loop

- Build the model from the graph
- Detect the loss function from the graph (Loss node)
- Load the dataset (presets, custom tensors, HuggingFace, CSV, ImageFolder)
- Train with gradient clipping, mixed precision, early stopping, and checkpointing
- Report progress via JSON-RPC notifications (`train.progress`, `train.log`)
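The early-stopping step in the loop above follows a standard patience scheme, sketched here as a standalone helper — the class name and exact semantics are illustrative assumptions, not the sidecar's actual code:

```python
# Minimal early-stopping sketch: stop when validation loss fails to
# improve by at least min_delta for `patience` consecutive epochs.
class EarlyStopping:
    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

stopper = EarlyStopping(patience=2)
losses = [1.0, 0.8, 0.9, 0.85]             # improves, then stalls twice
print([stopper.step(l) for l in losses])   # [False, False, False, True]
```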
## Module Registry

70+ PyTorch modules are defined in `registry.ts`, organized by category:

- **I/O**: Input, Output, Data
- **Linear**: Linear, Identity, Group
- **Convolution**: Conv1d, Conv2d, Conv3d, ConvTranspose2d
- **Pooling**: MaxPool2d, AvgPool2d, AdaptiveAvgPool1d/2d
- **Normalization**: BatchNorm1d/2d, LayerNorm, RMSNorm, InstanceNorm
- **Activation**: ReLU, GELU, Tanh, Sigmoid, Softmax, LeakyReLU, ELU, etc.
- **Recurrent**: LSTM, GRU
- **Transformer**: TransformerEncoderLayer, TransformerDecoderLayer, Transformer
- **Dropout**: Dropout, Dropout2d
- **Embedding**: Embedding
- **Loss**: CrossEntropyLoss, MSELoss, BCEWithLogitsLoss, L1Loss, and more
- **Reshape**: Flatten, Unflatten, Permute, Squeeze, Unsqueeze, ZeroPad2d
- **Math**: Add, Concatenate, MatMul
Each entry defines: type, label, category, description, parameters (with types, defaults, constraints), and input/output handles.
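A hypothetical registry entry, rendered in Python for illustration — the real definitions live in TypeScript in `registry.ts`, and every field value below beyond the field list above is an assumption:

```python
# Sketch of one registry entry for a Linear node (hypothetical values).
linear_entry = {
    "type": "Linear",
    "label": "Linear",
    "category": "Linear",
    "description": "Fully connected layer: y = xW^T + b.",
    "parameters": [
        {"name": "in_features",  "type": "int",  "default": 128, "min": 1},
        {"name": "out_features", "type": "int",  "default": 64,  "min": 1},
        {"name": "bias",         "type": "bool", "default": True},
    ],
    "inputs": ["in"],     # input handles
    "outputs": ["out"],   # output handles
}

# Code generation can read constructor arguments straight off the entry:
args = ", ".join(str(p["default"]) for p in linear_entry["parameters"][:2])
print(f'nn.{linear_entry["type"]}({args})')   # nn.Linear(128, 64)
```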