Module Reference
TensorBloom ships with a registry of over 70 PyTorch modules, organized into 14 categories. Each module maps directly to its PyTorch counterpart — same parameters, same behavior, rendered as a configurable node in the graph editor.
I/O
- Input — Entry point for tensor data. Configure the input shape to match your dataset.
- Output — Terminal node that defines the model’s output.
Data
Built-in dataset nodes that handle downloading, preprocessing, and batching:
- MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100
- TinyShakespeare, WikiText-2, IMDB, AG News
- SpeechCommands
- ImageFolder (local image directories)
- HuggingFace (experimental)
- Custom CSV
- Custom Tensors (load your own `.pt`, `.npz`, or `.safetensors` files)
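As a rough sketch of what a Custom Tensors node does, the example below saves a dictionary of tensors to a `.pt` file, then reloads and batches it with a `DataLoader`. The `"inputs"`/`"targets"` keys and the `data.pt` filename are illustrative, not the node's actual on-disk format.

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Save tensors to a .pt file (keys are illustrative, not TensorBloom's format)
x = torch.randn(100, 1, 28, 28)   # 100 grayscale 28x28 images
y = torch.randint(0, 10, (100,))  # integer class labels
torch.save({"inputs": x, "targets": y}, "data.pt")

# Reload and batch — roughly what a dataset node does behind the scenes
blob = torch.load("data.pt")
dataset = TensorDataset(blob["inputs"], blob["targets"])
loader = DataLoader(dataset, batch_size=32, shuffle=True)

xb, yb = next(iter(loader))
print(xb.shape)  # torch.Size([32, 1, 28, 28])
```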
Linear
- Linear — Fully connected layer (`nn.Linear`). Set `in_features`, `out_features`, and optional `bias`.
- Identity — Pass-through layer, useful for skip connections.
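In plain PyTorch, the two nodes above correspond to `nn.Linear` and `nn.Identity`:

```python
import torch
import torch.nn as nn

# A Linear node: 784 inputs -> 128 outputs, with a bias term
fc = nn.Linear(in_features=784, out_features=128, bias=True)

x = torch.randn(32, 784)   # a batch of 32 flattened 28x28 images
out = fc(x)
print(out.shape)           # torch.Size([32, 128])

# Identity is a no-op, handy as a placeholder on a skip connection
skip = nn.Identity()
assert torch.equal(skip(x), x)
```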
Convolution
- Conv1d / Conv2d / Conv3d — Standard convolutions for 1D, 2D, and 3D data.
- ConvTranspose2d — Transposed convolution ("deconvolution") for upsampling in generators and decoders.
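A quick shape check on the underlying PyTorch modules: with `padding=1` a 3×3 convolution preserves spatial size, while a stride-2 transposed convolution doubles it.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
up = nn.ConvTranspose2d(16, 3, kernel_size=2, stride=2)

x = torch.randn(8, 3, 32, 32)
h = conv(x)   # padding=1 keeps spatial size: (8, 16, 32, 32)
y = up(h)     # stride=2 doubles it:          (8, 3, 64, 64)
print(h.shape, y.shape)
```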
Pooling
- MaxPool1d / MaxPool2d — Max pooling with configurable kernel size and stride.
- AvgPool1d / AvgPool2d — Average pooling.
- AdaptiveAvgPool1d / AdaptiveAvgPool2d — Pooling that targets a fixed output size rather than a fixed kernel size.
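The difference between the two families in PyTorch terms: with a fixed kernel the output size depends on the input, while adaptive pooling lets you name the output size directly.

```python
import torch
import torch.nn as nn

x = torch.randn(4, 16, 31, 31)   # odd spatial size on purpose

# Fixed kernel: output size is derived from the input size
print(nn.MaxPool2d(kernel_size=2)(x).shape)    # (4, 16, 15, 15)

# Adaptive: you name the output size, the kernel is derived
print(nn.AdaptiveAvgPool2d((1, 1))(x).shape)   # (4, 16, 1, 1)
```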
Normalization
- BatchNorm1d / BatchNorm2d / BatchNorm3d — Batch normalization.
- LayerNorm — Layer normalization, common in transformers.
- RMSNorm — Root mean square normalization.
- GroupNorm — Group normalization.
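These layers differ in which axes they normalize over, but none of them changes the tensor's shape; a short PyTorch sketch:

```python
import torch
import torch.nn as nn

x = torch.randn(8, 32, 10)   # (batch, channels, length)

bn = nn.BatchNorm1d(32)      # normalizes each channel across the batch
ln = nn.LayerNorm(10)        # normalizes over the last dimension
gn = nn.GroupNorm(num_groups=4, num_channels=32)

for m in (bn, ln, gn):
    assert m(x).shape == x.shape   # normalization never changes shape
```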
Activation
- ReLU / LeakyReLU / PReLU — Rectified linear units.
- GELU / SiLU — Smooth activations used in modern architectures.
- Sigmoid / Tanh — Classic bounded activations.
- Softmax / LogSoftmax — Output normalization for classification.
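Two quick sanity checks on the corresponding PyTorch modules: Softmax turns raw scores into a probability distribution (each row sums to 1), and ReLU clamps negatives to zero.

```python
import torch
import torch.nn as nn

logits = torch.tensor([[2.0, 1.0, 0.1]])
probs = nn.Softmax(dim=-1)(logits)
print(probs.sum())   # rows sum to 1

x = torch.tensor([-1.0, 0.0, 1.0])
print(nn.ReLU()(x))  # tensor([0., 0., 1.])
```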
Recurrent
- LSTM — Long Short-Term Memory with configurable layers, hidden size, and bidirectionality.
- GRU — Gated Recurrent Unit.
- RNN — Vanilla recurrent network.
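The recurrent layers' output shapes follow directly from their configuration; for example, a bidirectional `nn.LSTM` doubles the feature dimension of its output:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=16, hidden_size=32, num_layers=2,
               batch_first=True, bidirectional=True)

x = torch.randn(4, 50, 16)   # (batch, seq_len, features)
out, (h, c) = lstm(x)
print(out.shape)  # (4, 50, 64) — hidden_size doubled by bidirectionality
print(h.shape)    # (4, 4, 32)  — (num_layers * num_directions, batch, hidden)
```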
Transformer
- MultiheadAttention — Scaled dot-product attention with configurable heads.
- TransformerEncoderLayer — Full encoder layer (attention + feedforward + norm).
- TransformerDecoderLayer — Full decoder layer with cross-attention.
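An encoder layer maps a sequence of `d_model`-dimensional tokens to a sequence of the same shape, which is what lets these layers stack; in PyTorch terms:

```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=64, nhead=4,
                                   dim_feedforward=256, batch_first=True)
x = torch.randn(2, 10, 64)   # (batch, tokens, d_model)
out = layer(x)
print(out.shape)             # shape is preserved: (2, 10, 64)
```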
Dropout
- Dropout — Standard dropout with configurable probability.
- AlphaDropout — Self-normalizing dropout for SELU networks.
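Dropout behaves differently in training and evaluation: in training mode survivors are scaled by 1/(1-p) so the expected activation stays the same, and in eval mode the layer is a no-op.

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1000)

drop.train()
print(drop(x).mean())  # roughly 1.0 — survivors are scaled by 1/(1-p)

drop.eval()
assert torch.equal(drop(x), x)  # no-op at inference time
```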
Embedding
- Embedding — Learnable lookup table for token indices. Set `num_embeddings` and `embedding_dim`.
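In PyTorch terms, the lookup turns each integer index into a learned vector, so a `(batch, seq_len)` tensor of token IDs becomes `(batch, seq_len, embedding_dim)`:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=1000, embedding_dim=64)  # vocab of 1000
tokens = torch.tensor([[5, 42, 7], [1, 0, 999]])           # (batch, seq_len)
vectors = emb(tokens)
print(vectors.shape)   # (2, 3, 64) — each index becomes a learned vector
```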
Loss
13 loss functions available as terminal nodes:
- CrossEntropyLoss, NLLLoss, MSELoss, L1Loss
- BCELoss, BCEWithLogitsLoss
- HuberLoss, SmoothL1Loss
- KLDivLoss, CosineEmbeddingLoss
- TripletMarginLoss, HingeEmbeddingLoss
- CTCLoss
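A common pitfall worth noting: `nn.CrossEntropyLoss` applies log-softmax internally, so it expects raw logits and integer class targets, not probabilities. `nn.BCEWithLogitsLoss` is the binary analogue.

```python
import torch
import torch.nn as nn

logits = torch.randn(8, 10)            # raw scores for 10 classes
targets = torch.randint(0, 10, (8,))   # integer class labels

# Feed raw logits — CrossEntropyLoss applies log-softmax itself
loss = nn.CrossEntropyLoss()(logits, targets)
print(loss)   # a scalar

# Binary case: also takes raw logits, targets in [0, 1]
binary = nn.BCEWithLogitsLoss()(torch.randn(8), torch.rand(8))
```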
Reshape
- Flatten — Collapse dimensions for fully connected layers.
- Reshape / View — Arbitrary shape transforms.
- Permute — Reorder dimensions.
- Squeeze / Unsqueeze — Remove or add dimensions.
- Transpose — Swap two dimensions.
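The tensor-level equivalents of these nodes, shown on a small feature map:

```python
import torch

x = torch.randn(8, 16, 4, 4)

flat = torch.flatten(x, start_dim=1)  # Flatten:   (8, 256)
hwc = x.permute(0, 2, 3, 1)           # Permute:   (8, 4, 4, 16)
col = x.unsqueeze(-1)                 # Unsqueeze: (8, 16, 4, 4, 1)
swapped = x.transpose(1, 2)           # Transpose: (8, 4, 16, 4)
print(flat.shape, hwc.shape)
```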
Math
- Add / Multiply — Element-wise operations for residual connections and gating.
- Concatenate — Join tensors along a specified dimension.
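The three operations above cover the standard ways of merging two branches of a graph; in raw PyTorch:

```python
import torch

a = torch.randn(8, 64)
b = torch.randn(8, 64)

residual = a + b                  # Add: the core of a skip connection
gated = a * torch.sigmoid(b)      # Multiply: simple element-wise gating
joined = torch.cat([a, b], dim=1) # Concatenate along features: (8, 128)
print(joined.shape)
```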