TORCH: Tensor Operations with the Reasoning Capacity of Humans
Torch is a powerful library for tensor computations and deep learning, offering
a comprehensive set of tools for creating and manipulating multidimensional arrays.
It provides a wide range of mathematical operations, and it includes a neural network module (torch.nn) that facilitates
the construction of complex neural architectures through a modular approach, with
various layer types and activation functions readily available. Torch also
implements automatic differentiation, enabling efficient gradient computation for
training neural networks, and offers optimization algorithms like Adam for parameter
updates. Additionally, it includes utilities for saving and loading models, making
it a versatile and complete framework for developing and deploying machine learning
solutions.
Torch is a neural-network matrix-multiplication library that follows the PyTorch API syntax for tensors and neural nets. It uses GPU.js acceleration to translate matmul operations into WebGL shader code, and GPU.js performs this matmul faster than PyTorch.
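The `device` argument is what routes work through GPU.js. A minimal sketch of the idea (the `js-pytorch` package name and the array-style shape argument are assumptions; this reference doesn't pin down either):

```js
const torch = require("js-pytorch"); // package name assumed

// Tensors created with device = 'gpu' have their matmuls compiled
// by GPU.js into WebGL shaders; 'cpu' keeps everything in plain JS.
const x = torch.randn([128, 256], false, 'gpu');
const w = torch.randn([256, 64], false, 'gpu');

const y = torch.matmul(x, w); // runs as a WebGL shader via GPU.js
console.log(y.tolist());      // back to a nested JavaScript Array
```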
1. Tensor Creation:
- `tensor()`: Creates a new Tensor filled with given data
- `zeros()`: Creates a new Tensor filled with zeros
- `ones()`: Creates a new Tensor filled with ones
- `randn()`: Creates a new Tensor filled with random values from a normal distribution
- `rand()`: Creates a new Tensor filled with random values from a uniform distribution
2. Tensor Properties and Methods:
- `backward()`: Performs backpropagation from this tensor backwards
- `zero_grad()`: Clears the gradients stored in this tensor
- `tolist()`: Returns the tensor's data as a JavaScript Array
- Properties: `data`, `length`, `ndims`, `grad`
3. Basic Arithmetic Operations:
- `add()`, `sub()`, `mul()`, `div()`: Element-wise arithmetic operations
- `matmul()`: Matrix multiplication between two tensors
- `pow()`: Element-wise power operation
4. Statistical Operations:
- `sum()`: Gets the sum of the Tensor over a specified dimension
- `mean()`: Gets the mean of the Tensor over a specified dimension
- `variance()`: Gets the variance of the Tensor over a specified dimension
5. Tensor Manipulation:
- `transpose()`: Transposes the tensor along two consecutive dimensions
- `at()`: Returns elements from the tensor based on given indices
- `masked_fill()`: Fills elements in the tensor based on a condition
6. Mathematical Functions:
- `sqrt()`: Element-wise square root
- `exp()`: Element-wise exponentiation
- `log()`: Element-wise natural logarithm
7. Neural Network Layers (torch.nn):
- `Linear()`: Applies a linear transformation
- `MultiHeadSelfAttention()`: Applies a self-attention layer
- `Embedding()`: Creates an embedding table for vocabulary
- Activation functions: `ReLU()`, `Softmax()`
8. Optimization and Loss:
- `optim.Adam()`: Adam optimizer for updating model parameters
- `nn.CrossEntropyLoss()`: Computes Cross Entropy Loss
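The eight groups above compose the same way their PyTorch namesakes do. A minimal end-to-end sketch, assuming the import shown earlier and that a reduction to a single value is a valid target for `backward()`:

```js
// 1. Creation: track gradients on the weights only.
const x = torch.randn([4, 3]);
const w = torch.randn([3, 2], true);

// 3. Arithmetic: every op records itself for autograd.
const out = torch.matmul(x, w);

// 4. Statistical: reduce to a single value to use as a loss.
const loss = torch.mean(torch.mean(out, 1), 0);

// 2. Backpropagation and gradient bookkeeping.
loss.backward();
console.log(w.grad); // d(loss)/d(w)
w.zero_grad();       // clear before the next iteration
```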
torch
- `tensor(data, requires_grad = false, device = 'cpu')`: Creates a new Tensor filled with the given data
- `zeros(*shape, requires_grad = false, device = 'cpu')`: Creates a new Tensor filled with zeros
- `ones(*shape, requires_grad = false, device = 'cpu')`: Creates a new Tensor filled with ones
- `tril(*shape, requires_grad = false, device = 'cpu')`: Creates a new 2D lower triangular Tensor
- `randn(*shape, requires_grad = false, device = 'cpu', xavier = false)`: Creates a new Tensor filled with random values from a normal distribution
- `rand(*shape, requires_grad = false, device = 'cpu')`: Creates a new Tensor filled with random values from a uniform distribution
- `randint(low, high, *shape, requires_grad = false, device = 'cpu')`: Creates a new Tensor filled with random integers
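Hedged examples of the creation functions above (`*shape` is written here as an array argument, which is an assumption about the calling convention):

```js
const a = torch.tensor([[1, 2], [3, 4]]);         // from explicit data
const z = torch.zeros([2, 3]);                    // all zeros
const o = torch.ones([2, 3]);                     // all ones
const mask = torch.tril([4, 4]);                  // 2D lower-triangular (e.g. a causal attention mask)
const n = torch.randn([2, 3], true, 'cpu', true); // normal distribution, Xavier-scaled, tracking grads
const u = torch.rand([2, 3]);                     // uniform distribution
const r = torch.randint(0, 10, [2, 3]);           // random integers between low and high
```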
Tensor Methods:
- `tensor.backward()`: Performs backpropagation from this tensor backwards
- `tensor.zero_grad()`: Clears the gradients stored in this tensor
- `tensor.zero_grad_graph()`: Clears the gradients stored in this tensor and all tensors that led to it
- `tensor.tolist()`: Returns the tensor's data as a JavaScript Array
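How the four methods interact, as a sketch (that `.grad` is itself a Tensor with `tolist()` is an assumption, based on the `grad` property listed in the overview):

```js
const w = torch.randn([3, 3], true);
const y = torch.matmul(w, w);
const loss = torch.mean(torch.mean(y, 1), 0);

loss.backward();              // fill .grad on every requires_grad tensor upstream
console.log(w.grad.tolist()); // gradients as a plain JavaScript Array

w.zero_grad();                // clear this tensor's gradient only
loss.zero_grad_graph();       // clear gradients on loss and everything that produced it
```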
Operations:
- `add(a, b)`: Performs element-wise addition of two tensors
- `sub(a, b)`: Performs element-wise subtraction of two tensors
- `neg(a)`: Returns the element-wise opposite of the given Tensor
- `mul(a, b)`: Performs element-wise multiplication of two tensors
- `div(a, b)`: Performs element-wise division of two tensors
- `matmul(a, b)`: Performs matrix multiplication between two tensors
- `sum(a, dim, keepdims = false)`: Gets the sum of the Tensor over a specified dimension
- `mean(a, dim, keepdims = false)`: Gets the mean of the Tensor over a specified dimension
- `variance(a, dim, keepdims = false)`: Gets the variance of the Tensor over a specified dimension
- `transpose(a, dim1, dim2)`: Transposes the tensor along two consecutive dimensions
- `at(a, index1, index2)`: Returns elements from the tensor based on given indices
- `masked_fill(a, condition, value)`: Fills elements in the tensor based on a condition
- `pow(a, n)`: Returns the tensor raised to an element-wise power
- `sqrt(a)`: Returns the element-wise square root of the tensor
- `exp(a)`: Returns the element-wise exponentiation of the tensor
- `log(a)`: Returns the element-wise natural log of the tensor
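A few of the operations above in combination. This is a sketch; in particular, the predicate form of `condition` in `masked_fill` is an assumption, since the reference doesn't specify its type:

```js
const a = torch.randn([2, 3, 4]);
const b = torch.randn([2, 3, 4]);

// Element-wise arithmetic and math:
const c = torch.div(torch.add(a, b), torch.exp(b));
const d = torch.sqrt(torch.pow(c, 2));

// Reductions over a dimension:
const s = torch.sum(a, 2);            // reduces the last dimension -> shape [2, 3]
const v = torch.variance(a, 2, true); // keepdims -> shape [2, 3, 1]

// Manipulation: swap the last two (consecutive) dims, then mask.
const t = torch.transpose(a, 1, 2);                     // shape [2, 4, 3]
const masked = torch.masked_fill(a, (el) => el < 0, 0); // zero out negatives (predicate form assumed)
```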
torch.nn
Neural Network Layers:
- `nn.Linear(in_size, out_size, device, bias, xavier)`: Applies a linear transformation to the input tensor
- `nn.MultiHeadSelfAttention(in_size, out_size, n_heads, n_timesteps, dropout_prob, device)`: Applies a self-attention layer on the input tensor
- `nn.FullyConnected(in_size, out_size, dropout_prob, device, bias)`: Applies a fully-connected layer on the input tensor
- `nn.Block(in_size, out_size, n_heads, n_timesteps, dropout_prob, device)`: Applies a transformer Block layer on the input tensor
- `nn.Embedding(in_size, embed_size)`: Creates an embedding table for the vocabulary
- `nn.PositionalEmbedding(input_size, embed_size)`: Creates a positional embedding table
- `nn.ReLU()`: Applies the Rectified Linear Unit activation function
- `nn.Softmax()`: Applies the Softmax activation function
- `nn.Dropout(drop_prob)`: Applies dropout to the input tensor
- `nn.LayerNorm(n_embed)`: Applies Layer Normalization to the input tensor
- `nn.CrossEntropyLoss()`: Computes Cross Entropy Loss between target and input tensor
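A sketch of wiring these layers by hand into a tiny classifier. The `.forward()` method name on layers is an assumption, as the reference lists constructors only:

```js
const nn = torch.nn;

const fc1 = new nn.Linear(16, 32);
const relu = new nn.ReLU();
const drop = new nn.Dropout(0.1);
const fc2 = new nn.Linear(32, 4);
const lossFn = new nn.CrossEntropyLoss();

function forward(x) {
  let h = relu.forward(fc1.forward(x)); // .forward() naming assumed
  h = drop.forward(h);
  return fc2.forward(h);                // logits for 4 classes
}
```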
Optimization:
- `optim.Adam(params, lr, reg, betas, eps)`: Adam optimizer for updating model parameters
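One training step with Adam, continuing the classifier sketch above. How `params` is gathered from the layers is an assumption (the reference doesn't list a `parameters()` helper), `xBatch`/`yBatch` are hypothetical input and target tensors, and the hyperparameter values are illustrative:

```js
// params: an array of every trainable tensor in the model (gathering
// convention assumed; collect them however your model code exposes them).
const optimizer = new torch.optim.Adam(params, 3e-4, 0, [0.9, 0.99], 1e-9);

const logits = forward(xBatch);              // xBatch/yBatch: hypothetical batch tensors
const loss = lossFn.forward(logits, yBatch); // Cross Entropy between input and target

loss.backward();       // gradients through the whole graph
optimizer.step();      // Adam update of every tensor in params
optimizer.zero_grad(); // reset before the next batch (method name assumed)
```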
Utility Functions:
- `save(model, file)`: Saves the model, returning a data blob (for you to save)
- `load(model, loadedData)`: Loads the model from saved data
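Checkpointing with the two utilities above. Per the reference, `save()` returns the data blob for you to persist yourself; how you write it out (file system, localStorage, download) depends on your environment:

```js
// Serialize: save() returns the model's data blob.
const data = torch.save(model, 'model.json');
// ...persist `data` wherever suits your environment (fs, localStorage, etc.).

// Restore: rebuild the same architecture, then load the saved data into it.
const fresh = buildModel(); // hypothetical helper recreating the layers
torch.load(fresh, data);
```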
Author
PyTorch Contributors; Leao, E. et al. (2022). See also: Brain.js.