TypeTensor - v0.1.0
TypeTensor: TypeScript's compile-time tensor library - catch shape errors before runtime
Traditional tensor libraries catch shape errors at runtime. TypeTensor catches them at compile time using TypeScript's type system, preventing bugs before your code runs.
Quick Install:
npm install @typetensor/core @typetensor/backend-cpu
Key Features
🔍 Compile-Time Shape Safety
Tensor shapes are validated at compile time using TypeScript's type system
Incompatible operations are caught before your code runs
IntelliSense shows resulting tensor shapes as you type
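For example, here is a minimal sketch of the idea in plain TypeScript (the Tensor, zeros, and matmul names below are illustrative stand-ins, not TypeTensor's published API): the shape lives in the type, so a mismatched matrix product fails to compile.

type Shape = readonly number[];

interface Tensor<S extends Shape> {
  readonly shape: S;
}

// Hypothetical factory and op signatures, for illustration only.
declare function zeros<S extends Shape>(shape: S): Tensor<S>;
declare function matmul<M extends number, K extends number, N extends number>(
  a: Tensor<readonly [M, K]>,
  b: Tensor<readonly [K, N]>,
): Tensor<readonly [M, N]>;

const a = zeros([2, 3] as const);
const b = zeros([3, 4] as const);
const c = matmul(a, b); // typed as Tensor<readonly [2, 4]>

const bad = zeros([5, 4] as const);
// matmul(a, bad);      // rejected by the compiler: the inner dimensions (3 vs 5) do not line up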
🧮 Complete Type System
Full numeric type support: bool, int8/16/32/64, uint8/16/32/64, float32/64
NumPy-compatible type promotion rules
Safe type conversion with overflow/precision loss detection
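As a rough illustration of how promotion can live in the type system, here is a tiny slice of NumPy's rules encoded as a conditional type (a sketch covering only four dtypes, not TypeTensor's actual dtype implementation):

type DType = 'bool' | 'int32' | 'float32' | 'float64';

type Promote<A extends DType, B extends DType> =
  A extends B ? A
  : 'float64' extends A | B ? 'float64'
  : 'bool' extends A ? B
  : 'bool' extends B ? A
  // int32 mixed with float32 fits exactly in neither, so NumPy promotes the pair to float64
  : 'float64';

type T1 = Promote<'bool', 'int32'>;      // 'int32'
type T2 = Promote<'float32', 'float64'>; // 'float64'
type T3 = Promote<'int32', 'float32'>;   // 'float64'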
📐 Broadcasting & Shape Operations
NumPy-compatible broadcasting with compile-time validation
Rich shape manipulation: reshape, transpose, squeeze, unsqueeze
Matrix multiplication with automatic shape inference
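Broadcasting can be expressed at the type level as well. The sketch below is illustrative only and simplified to two shapes of equal rank; it mirrors NumPy's rule that a dimension of size 1 stretches to match the other:

type BroadcastDim<A extends number, B extends number> =
  A extends B ? A : A extends 1 ? B : B extends 1 ? A : never;

// Assumes both shapes have the same rank; a full implementation would also handle rank differences.
type BroadcastShape<A extends readonly number[], B extends readonly number[]> =
  A extends readonly [infer AH extends number, ...infer AT extends readonly number[]]
    ? B extends readonly [infer BH extends number, ...infer BT extends readonly number[]]
      ? [BroadcastDim<AH, BH>, ...BroadcastShape<AT, BT>]
      : []
    : [];

type Ok  = BroadcastShape<[2, 1, 4], [2, 3, 4]>; // [2, 3, 4]
type Bad = BroadcastShape<[2, 5, 4], [2, 3, 4]>; // [2, never, 4] -> surfaces as a compile error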
🔀 Einops Integration
Tensor rearrangement using Einstein notation
Pattern-based transformations: "h w c -> c h w", "batch seq -> batch seq 1"
Compile-time validation of einops patterns
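Pattern validation can be sketched with template literal types. The example below is illustrative only (the rearrange signature is hypothetical, not the published API) and checks just one property: every axis named on the right of "->" must also appear on the left.

type Axes<S extends string> =
  S extends `${infer Head} ${infer Rest}` ? Head | Axes<Rest> : S;

type ValidPattern<P extends string> =
  P extends `${infer Lhs} -> ${infer Rhs}`
    ? Axes<Rhs> extends Axes<Lhs> ? P : never
    : never;

// Hypothetical signature, for illustration only.
declare function rearrange<P extends string>(pattern: ValidPattern<P> & P): void;

rearrange('h w c -> c h w');  // ok
// rearrange('h w -> c h w'); // compile error: 'c' is not introduced on the left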
⚡ Pluggable Backends
Modular backend system for different compute targets
CPU, GPU (CUDA, WebGPU, Metal), and WebAssembly support
Zero-copy operations where possible
What Makes TypeTensor Different
Compile-Time Validation: Shape mismatches caught by TypeScript before runtime
Zero Runtime Overhead: Type checking happens entirely at compile time
Familiar API: NumPy-like interface that JavaScript developers expect
Einstein Notation: Tensor transformations with einops patterns
Modular Backends: Use CPU, GPU, or WebAssembly as needed
Architecture
TypeTensor is designed with a clear separation of concerns:
┌──────────────────────────────────────┐
│ @typetensor/core                     │ ← Type system, shapes, tensor API
├──────────────────────────────────────┤
│ Backend Interface (Device/Ops)       │ ← Abstract execution layer
├──────────────────────────────────────┤
│ Concrete Backend Packages            │ ← Actual computation
│   • backend-cpu                      │
│   • backend-cuda                     │
│   • backend-webgpu                   │
│   • backend-metal                    │
│   • backend-wasm                     │
└──────────────────────────────────────┘
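A rough sketch of the kind of contract the backend interface layer defines is shown below; the interface and method names are assumptions for illustration, not the published Device/Ops API:

// Opaque handle to memory that lives on a particular device.
interface DeviceData {
  readonly byteLength: number;
}

interface Device {
  readonly id: string;                               // e.g. 'cpu', 'webgpu'
  createData(byteLength: number): DeviceData;        // allocate on the device
  readData(data: DeviceData): Promise<ArrayBuffer>;  // copy back to the host
  execute(op: string, inputs: readonly DeviceData[], output: DeviceData): Promise<void>;
}

// Core only talks to an interface like this; each backend package ships a concrete
// implementation, so swapping CPU for WebGPU means passing a different device object.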
Core Design Principles
Type Safety First: Every operation is validated at compile time
Zero Runtime Overhead: Type checking happens entirely at compile time
Familiar API: NumPy-like interface that JavaScript/TypeScript developers expect
Modular Architecture: Use only the backends you need
Mathematical Correctness: Proper handling of broadcasting, type promotion, and numerical precision
Packages
Core Capabilities
Tensor Operations
Creation: Tensors from data, zeros, ones, identity matrices
Element-wise: Arithmetic, trigonometric, and exponential functions
Linear Algebra: Matrix multiplication with automatic broadcasting
Reductions: Sum, mean, max, min along specified axes
Shape Manipulation: Reshape, transpose, slice, and permute operations
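Reductions interact with static shapes in a useful way: reducing along an axis removes that axis from the result type. The sketch below shows one way to express this in TypeScript; the sum signature is hypothetical, not the library's actual API.

// Drop the dimension at index Axis from shape S (illustrative only).
type RemoveAxis<
  S extends readonly number[],
  Axis extends number,
  Acc extends readonly number[] = [],
> =
  S extends readonly [infer H extends number, ...infer T extends readonly number[]]
    ? Acc['length'] extends Axis
      ? [...Acc, ...T]
      : RemoveAxis<T, Axis, [...Acc, H]>
    : Acc;

declare function sum<S extends readonly number[], A extends number>(
  t: { shape: S },
  axis: A,
): { shape: RemoveAxis<S, A> };

declare const x: { shape: [2, 3, 4] };
const reduced = sum(x, 1); // typed as { shape: [2, 4] }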
Advanced Features
Broadcasting: NumPy-compatible shape alignment
Type System: Numeric type support with automatic promotion
Memory Views: Tensor views without data copying
Activation Functions: Softmax, log-softmax
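For reference, softmax is usually implemented in the numerically stable form that subtracts the maximum before exponentiating; here it is in plain TypeScript, independent of TypeTensor's API:

// Numerically stable softmax: shifting by the max does not change the result
// (the shift cancels in the ratio) but prevents exp() overflow for large inputs.
function softmax(x: readonly number[]): number[] {
  const max = Math.max(...x);
  const exps = x.map((v) => Math.exp(v - max));
  const total = exps.reduce((acc, e) => acc + e, 0);
  return exps.map((e) => e / total);
}

softmax([1, 2, 3]); // ≈ [0.090, 0.245, 0.665]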
Einstein Notation (Einops)
TypeTensor supports tensor transformations using Einstein notation patterns:
Format Conversions: "h w c -> c h w" (HWC ↔ CHW)
Dimension Manipulation: "h w -> 1 h w" (add batch dimension)
Flattening: "h w c -> (h w) c" (combine spatial dimensions)
Splitting: "(h w) c -> h w c" with explicit dimension sizes
Multi-head Attention: "b s (h d) -> b h s d" (prepare attention heads)
The current implementation supports basic rearrangement patterns, with more advanced features planned.
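One detail worth spelling out is how splitting with explicit sizes resolves the unknown dimension. The snippet below is not library code, just the arithmetic the pattern implies: given "(h w) c" with shape [12, 3] and h = 4, the remaining factor w must be 12 / 4 = 3.

// Factor a flattened axis into [known, inferred] parts; the product must match exactly.
function splitAxis(flat: number, known: number): [number, number] {
  if (flat % known !== 0) {
    throw new Error(`cannot split axis of size ${flat} into groups of ${known}`);
  }
  return [known, flat / known];
}

splitAxis(12, 4); // [4, 3] -> "(h w) c" with shape [12, 3] becomes "h w c" with shape [4, 3, 3]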
Getting Started
Development Status
TypeTensor is in early development. The core type system and CPU backend are functional, but the project is not yet ready for production use.
Current Status:
✅ Core type system and shape validation
✅ Basic tensor operations (arithmetic, views, reshaping)
✅ Einops-style tensor rearrangement
✅ CPU backend with fundamental operations
🚧 Advanced operations (convolution, pooling, etc.)
🚧 Additional backends (GPU, WebAssembly)
🚧 Performance optimizations
Contributing
We welcome contributions! Please see our contribution guidelines for details.
Prior Art & Inspiration
TypeTensor builds on ideas from:
PyTorch - Dynamic neural networks and tensor operations
Candle - Rust tensor library (contributor experience with tensor implementations)
Burn - Compile-time shape safety in Rust
ArkType - Type-safe validation (inspiration for einops parsing patterns)
License
MIT © Thomas Santerre