TypeTensor - v0.1.0

    TypeTensor

    TypeScript's compile-time tensor library - catch shape errors before runtime



    Traditional tensor libraries catch shape errors at runtime. TypeTensor catches them at compile time using TypeScript's type system, preventing bugs before your code runs.

    Quick Install:

    npm install @typetensor/core @typetensor/backend-cpu
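
    A minimal sketch of the underlying idea, in plain TypeScript rather than TypeTensor's actual API: when shapes are carried as tuple types, the compiler itself rejects a matrix multiply whose inner dimensions disagree.

```typescript
// Illustrative only — hypothetical signature, not TypeTensor's API.
declare function matmul<M extends number, K extends number, N extends number>(
  a: { shape: readonly [M, K] },
  b: { shape: readonly [K, N] },
): { shape: readonly [M, N] };

const a = { shape: [2, 3] } as const;
const b = { shape: [3, 4] } as const;
const c = { shape: [5, 4] } as const;

const ok = matmul(a, b); // inferred as { shape: readonly [2, 4] }

// @ts-expect-error — inner dimensions 3 and 5 disagree; rejected before the code ever runs
const bad = matmul(a, c);
```

    TypeTensor applies this mechanism across its API, which is also what lets IntelliSense display the resulting shape of each operation as you type.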
    
    • Tensor shapes are validated at compile time using TypeScript's type system
    • Incompatible operations are caught before your code runs
    • IntelliSense shows resulting tensor shapes as you type
    • Full numeric type support: bool, int8/16/32/64, uint8/16/32/64, float32/64
    • NumPy-compatible type promotion rules
    • Safe type conversion with overflow/precision loss detection
    • NumPy-compatible broadcasting with compile-time validation
    • Rich shape manipulation: reshape, transpose, squeeze, unsqueeze
    • Matrix multiplication with automatic shape inference
    • Tensor rearrangement using Einstein notation
    • Pattern-based transformations: "h w c -> c h w", "batch seq -> batch seq 1"
    • Compile-time validation of einops patterns
    • Modular backend system for different compute targets
    • CPU, GPU (CUDA, WebGPU, Metal), and WebAssembly support
    • Zero-copy operations where possible
    • Compile-Time Validation: Shape mismatches caught by TypeScript before runtime
    • Zero Runtime Overhead: Type checking happens entirely at compile time
    • Familiar API: NumPy-like interface that JavaScript developers expect
    • Einstein Notation: Tensor transformations with einops patterns
    • Modular Backends: Use CPU, GPU, or WebAssembly as needed

    TypeTensor is designed with a clear separation of concerns:

    ┌─────────────────────────────────┐
    │ @typetensor/core                │ ← Type system, shapes, tensor API
    ├─────────────────────────────────┤
    │ Backend Interface (Device/Ops)  │ ← Abstract execution layer
    ├─────────────────────────────────┤
    │ Concrete Backend Packages       │ ← Actual computation
    │   • backend-cpu                 │
    │   • backend-cuda                │
    │   • backend-webgpu              │
    │   • backend-metal               │
    │   • backend-wasm                │
    └─────────────────────────────────┘
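
    A rough sketch of what the abstract execution layer above implies (hypothetical names, not TypeTensor's actual interfaces): a backend is essentially a device that can allocate buffers and execute named operations, and the core package only ever talks to that abstraction.

```typescript
// Hypothetical device abstraction — illustrative only.
interface DeviceBuffer {
  readonly byteLength: number;
}

interface Device {
  readonly id: string; // e.g. "cpu", "webgpu"
  allocate(byteLength: number): DeviceBuffer;
  // Run a named kernel over the input buffers, writing the result into `output`.
  execute(op: string, inputs: DeviceBuffer[], output: DeviceBuffer): Promise<void>;
}

// A concrete backend package provides an implementation; swapping CPU for GPU or
// WebAssembly then requires no changes to tensor code built on the core package.
class CpuDevice implements Device {
  readonly id = 'cpu';
  allocate(byteLength: number): DeviceBuffer {
    return new ArrayBuffer(byteLength); // ArrayBuffer already exposes byteLength
  }
  async execute(_op: string, _inputs: DeviceBuffer[], _output: DeviceBuffer): Promise<void> {
    // dispatch to typed-array kernels here
  }
}
```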
    1. Type Safety First: Every operation is validated at compile time
    2. Zero Runtime Overhead: Type checking happens entirely at compile time
    3. Familiar API: NumPy-like interface that JavaScript/TypeScript developers expect
    4. Modular Architecture: Use only the backends you need
    5. Mathematical Correctness: Proper handling of broadcasting, type promotion, and numerical precision
    Package                      Status    Description
    @typetensor/core             Alpha     Core tensor operations and type system
    @typetensor/backend-cpu      Alpha     CPU backend implementation
    @typetensor/backend-cuda     🚧 TODO   NVIDIA CUDA GPU backend
    @typetensor/backend-webgpu   🚧 TODO   WebGPU backend for browsers
    @typetensor/backend-metal    🚧 TODO   Apple Metal GPU backend
    @typetensor/backend-vulkan   🚧 TODO   Vulkan GPU backend
    @typetensor/backend-wasm     🚧 TODO   WebAssembly backend
    • Creation: Tensors from data, zeros, ones, identity matrices
    • Element-wise: Arithmetic, trigonometric, and exponential functions
    • Linear Algebra: Matrix multiplication with automatic broadcasting
    • Reductions: Sum, mean, max, min along specified axes
    • Shape Manipulation: Reshape, transpose, slice, and permute operations
    • Broadcasting: NumPy-compatible shape alignment
    • Type System: Numeric type support with automatic promotion
    • Memory Views: Tensor views without data copying
    • Activation Functions: Softmax, log-softmax
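
    Type promotion can likewise be resolved statically. Below is a simplified sketch — not TypeTensor's internals, and only a slice of the dtype set (the full NumPy rules covering float32, the unsigned integers, and mixed integer/float cases are more involved):

```typescript
// Promote two dtypes to the wider of the pair, entirely at the type level,
// so the result dtype of a mixed-dtype operation is known statically.
type DType = 'bool' | 'int32' | 'int64' | 'float64';

type Promote<A extends DType, B extends DType> =
  A extends B ? A
  : 'float64' extends A | B ? 'float64'
  : 'int64' extends A | B ? 'int64'
  : 'int32' extends A | B ? 'int32'
  : 'bool';

type T1 = Promote<'int32', 'float64'>; // 'float64'
type T2 = Promote<'bool', 'int64'>;    // 'int64'
```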

    TypeTensor supports tensor transformations using Einstein notation patterns:

    • Format Conversions: "h w c -> c h w" (HWC ↔ CHW)
    • Dimension Manipulation: "h w -> 1 h w" (add batch dimension)
    • Flattening: "h w c -> (h w) c" (combine spatial dimensions)
    • Splitting: "(h w) c -> h w c" with explicit dimension sizes
    • Multi-head Attention: "b s (h d) -> b h s d" (prepare attention heads)

    The current implementation supports basic rearrangement patterns; more advanced features are planned.
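
    To give a flavour of how pattern validation can happen at compile time, here is a stripped-down sketch in plain TypeScript — not TypeTensor's implementation, and it handles only simple space-separated axes with no grouping such as "(h w)":

```typescript
// Split a space-separated axis list like "h w c" into the tuple ["h", "w", "c"].
type Split<S extends string> =
  S extends `${infer Head} ${infer Rest}` ? [Head, ...Split<Rest>] : [S];

// Pair each input axis with its size: ["h","w","c"] + [224,224,3]
// becomes { h: 224 } & { w: 224 } & { c: 3 }.
type AxisSizes<Axes, Shape> =
  Axes extends [infer A extends string, ...infer RestA]
    ? Shape extends [infer S extends number, ...infer RestS]
      ? { [K in A]: S } & AxisSizes<RestA, RestS>
      : never // tensor rank does not match the pattern
    : {};

// Build the output shape by looking up each output axis in that map.
type OutputShape<Axes, Sizes> =
  Axes extends [infer A extends string, ...infer Rest]
    ? A extends keyof Sizes
      ? [Sizes[A], ...OutputShape<Rest, Sizes>]
      : never // output names an axis missing from the input side
    : [];

type Rearranged<Pattern extends string, Shape> =
  Pattern extends `${infer In} -> ${infer Out}`
    ? OutputShape<Split<Out>, AxisSizes<Split<In>, Shape>>
    : never;

// "h w c -> c h w" applied to [224, 224, 3] resolves to [3, 224, 224],
// entirely within the type system.
type CHW = Rearranged<'h w c -> c h w', [224, 224, 3]>;
```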

    TypeTensor is in early development. The core type system and CPU backend are functional, but the project is not yet ready for production use.

    Current Status:

    • ✅ Core type system and shape validation
    • ✅ Basic tensor operations (arithmetic, views, reshaping)
    • ✅ Einops-style tensor rearrangement
    • ✅ CPU backend with fundamental operations
    • 🚧 Advanced operations (convolution, pooling, etc.)
    • 🚧 Additional backends (GPU, WebAssembly)
    • 🚧 Performance optimizations

    We welcome contributions! Please see our contribution guidelines for details.

    TypeTensor builds on ideas from:

    • PyTorch - Dynamic neural networks and tensor operations
    • Candle - Rust tensor library (contributor experience with tensor implementations)
    • Burn - Compile-time shape safety in Rust
    • ArkType - Type-safe validation (inspiration for einops parsing patterns)

    MIT © Thomas Santerre