Static members:
- Tensor: typeof Tensor
- Parameter: typeof Parameter
- add: (a: any, b: any) => any
- neg: (a: any) => any
- mul: (a: any, b: any) => any
- div: (a: any, b: any) => any
- matmul: (a: any, b: any) => any
- exp: (a: any) => any
- log: (a: any) => any
- sqrt: (a: any) => any
- pow: (a: any, n: any) => any
- mean: (a: any, dim?: number, keepdims?: boolean) => any
- masked_fill: (a: any, mask: any, condition: any, value: any) => any
- variance: (a: any, dim?: number, keepdims?: boolean) => any
- at: (a: any, idx1: any, idx2: any) => any
- reshape: (a: any, shape: any) => any
- _reshape: (a: any, shape: any) => any
- transpose: (a: any, dim1: any, dim2: any) => any
- tensor: (data: any, requires_grad?: boolean, device?: string) => Tensor
- randint: (low?: number, high?: number, shape?: number[], requires_grad?: boolean) => Tensor
- randn: (shape: any, requires_grad?: boolean, device?: string, xavier?: boolean) => Tensor
- rand: (shape: any, requires_grad?: boolean, device?: string) => Tensor
- tril: (shape: any, requires_grad?: boolean, device?: string) => Tensor
- ones: (shape: any, requires_grad?: boolean, device?: string) => Tensor
- zeros: (shape: any, requires_grad?: boolean, device?: string) => Tensor
- broadcast: (a: any, b: any) => Tensor
- save: (model: any, file: any) => string
- load: (model: any, loadedData: any) => any
- nn: {
    Module: typeof Module;
    Linear: typeof Linear;
    MultiHeadSelfAttention: typeof MultiHeadSelfAttention;
    FullyConnected: typeof FullyConnected;
    Block: typeof Block;
    Embedding: typeof Embedding;
    PositionalEmbedding: typeof PositionalEmbedding;
    ReLU: typeof ReLU;
    Softmax: typeof Softmax;
    Dropout: typeof Dropout;
    LayerNorm: typeof LayerNorm;
    CrossEntropyLoss: typeof CrossEntropyLoss;
  }
- optim: { Adam: typeof Adam }
- getShape: (data: any, shape: any[] = []) => any[]
torch
Torch is a neural network and matrix multiplication library.
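A minimal quick-start sketch using the documented tensor and matmul functions; the import path ("js-pytorch") and export shape are assumptions, so adjust them to wherever this module actually lives:

```ts
// Assumed import; adapt to how this package actually exports `torch`.
import * as torch from "js-pytorch";

// Create two tensors with tensor(data, requires_grad, device), then
// multiply them with matmul(a, b) and read the result back as an Array.
const a = torch.tensor([[1, 2], [3, 4]], /* requires_grad */ true);
const b = torch.tensor([[5, 6], [7, 8]]);
const c = torch.matmul(a, b);
console.log(c.tolist()); // [[19, 22], [43, 50]]
```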
Functions:
- tensor(data, requires_grad = false, device = 'cpu') - Creates a new Tensor filled with the given data
- zeros(*shape, requires_grad = false, device = 'cpu') - Creates a new Tensor filled with zeros
- ones(*shape, requires_grad = false, device = 'cpu') - Creates a new Tensor filled with ones
- tril(*shape, requires_grad = false, device = 'cpu') - Creates a new 2D lower-triangular Tensor
- randn(*shape, requires_grad = false, device = 'cpu', xavier = false) - Creates a new Tensor filled with random values drawn from a normal distribution
- rand(*shape, requires_grad = false, device = 'cpu') - Creates a new Tensor filled with random values drawn from a uniform distribution
- randint(low, high, *shape, requires_grad = false, device = 'cpu') - Creates a new Tensor filled with random integers
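A sketch of the creation functions above. The docstring writes *shape variadically while the static signatures take a single shape argument; the array form is assumed here:

```ts
const z = torch.zeros([2, 3]);           // 2x3 tensor of zeros
const o = torch.ones([2, 3]);            // 2x3 tensor of ones
const n = torch.randn([4, 4], true);     // normally distributed, requires_grad = true
const u = torch.rand([4, 4]);            // uniformly distributed values
const m = torch.tril([4, 4]);            // 2D lower-triangular tensor
const r = torch.randint(0, 10, [2, 2]);  // random integers in [0, 10)
```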
Tensor Methods:
- tensor.backward() - Performs backpropagation from this tensor backwards
- tensor.zero_grad() - Clears the gradients stored in this tensor
- tensor.zero_grad_graph() - Clears the gradients stored in this tensor and in all tensors that led to it
- tensor.tolist() - Returns the tensor's data as a JavaScript Array
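A short autograd sketch tying these methods together; calling backward() on a non-scalar tensor is assumed to be supported:

```ts
const x = torch.randn([2, 2], /* requires_grad */ true);
const y = torch.mul(x, x);     // element-wise square of x
const loss = torch.sum(y, 0);  // reduce over dimension 0
loss.backward();               // backpropagate from `loss` through the graph
x.zero_grad();                 // clear gradients on `x` alone
loss.zero_grad_graph();        // ...or clear `loss` and every tensor that led to it
```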
Tensor Operations:
- add(a, b) - Performs element-wise addition of two tensors
- sub(a, b) - Performs element-wise subtraction of two tensors
- neg(a) - Returns the element-wise opposite of the given Tensor
- mul(a, b) - Performs element-wise multiplication of two tensors
- div(a, b) - Performs element-wise division of two tensors
- matmul(a, b) - Performs matrix multiplication between two tensors
- sum(a, dim, keepdims = false) - Gets the sum of the Tensor over a specified dimension
- mean(a, dim, keepdims = false) - Gets the mean of the Tensor over a specified dimension
- variance(a, dim, keepdims = false) - Gets the variance of the Tensor over a specified dimension
- transpose(a, dim1, dim2) - Transposes the tensor along two consecutive dimensions
- at(a, index1, index2) - Returns elements from the tensor based on the given indices
- masked_fill(a, mask, condition, value) - Fills elements of the tensor with the given value wherever the condition holds on the mask
- pow(a, n) - Returns the tensor raised to an element-wise power
- sqrt(a) - Returns the element-wise square root of the tensor
- exp(a) - Returns the element-wise exponentiation of the tensor
- log(a) - Returns the element-wise natural log of the tensor
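A sketch of the reduction and shape operations above (broadcasting in mul is an assumption based on the broadcast helper in the static member list):

```ts
const t = torch.tensor([[1, 2, 3], [4, 5, 6]]);
const rowSums = torch.sum(t, 1);          // sum over dim 1 -> [6, 15]
const colMeans = torch.mean(t, 0, true);  // mean over dim 0, keepdims -> [[2.5, 3.5, 4.5]]
const tT = torch.transpose(t, 0, 1);      // swap dims 0 and 1 -> 3x2 tensor
const prod = torch.matmul(t, tT);         // (2x3) @ (3x2) -> 2x2
const doubled = torch.mul(t, torch.tensor([2])); // broadcasted element-wise multiply
```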
torch.nn - Neural Network Layers:
- nn.Linear(in_size, out_size, device, bias, xavier) - Applies a linear transformation to the input tensor
- nn.MultiHeadSelfAttention(in_size, out_size, n_heads, n_timesteps, dropout_prob, device) - Applies a self-attention layer to the input tensor
- nn.FullyConnected(in_size, out_size, dropout_prob, device, bias) - Applies a fully-connected layer to the input tensor
- nn.Block(in_size, out_size, n_heads, n_timesteps, dropout_prob, device) - Applies a transformer Block layer to the input tensor
- nn.Embedding(in_size, embed_size) - Creates an embedding table for a vocabulary
- nn.PositionalEmbedding(input_size, embed_size) - Creates a positional embedding table
- nn.ReLU() - Applies the Rectified Linear Unit activation function
- nn.Softmax() - Applies the Softmax activation function
- nn.Dropout(drop_prob) - Applies dropout to the input tensor
- nn.LayerNorm(n_embed) - Applies Layer Normalization to the input tensor
- nn.CrossEntropyLoss() - Computes the Cross-Entropy Loss between the target and the input tensor
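A sketch of composing these layers into a model. The nn.Module subclass pattern (a constructor that registers layers plus a forward method) is assumed from PyTorch convention, since the Module interface itself is not documented above:

```ts
// Hypothetical two-layer classifier; the subclass-and-forward pattern is assumed.
class MLP extends torch.nn.Module {
  fc1: any; relu: any; fc2: any;
  constructor() {
    super();
    this.fc1 = new torch.nn.Linear(64, 32);  // 64 -> 32 linear transformation
    this.relu = new torch.nn.ReLU();
    this.fc2 = new torch.nn.Linear(32, 10);  // 32 -> 10 class logits
  }
  forward(x: any) {
    return this.fc2.forward(this.relu.forward(this.fc1.forward(x)));
  }
}
```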
torch.optim - Optimization:
- optim.Adam(params, lr, reg, betas, eps) - Adam optimizer for updating model parameters
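A hedged training-step sketch. model.parameters() and optimizer.step() are assumed PyTorch-style APIs not documented above; gradient clearing uses the documented zero_grad_graph():

```ts
const model = new MLP();                           // from the sketch above
const criterion = new torch.nn.CrossEntropyLoss();
const optimizer = new torch.optim.Adam(model.parameters(), /* lr */ 1e-3);

for (let step = 0; step < 100; step++) {
  const x = torch.randn([8, 64]);                  // batch of 8 random inputs
  const target = torch.randint(0, 10, [8]);        // 8 class labels in [0, 10)
  const logits = model.forward(x);
  const loss = criterion.forward(logits, target);
  loss.backward();                                 // backpropagate the loss
  optimizer.step();                                // update parameters (assumed API)
  loss.zero_grad_graph();                          // clear gradients across the graph
}
```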
Utility Functions:
- save(model, file) - Saves the model, returning the data blob (for you to persist)
- load(model, loadedData) - Loads the model from the saved data
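A save/load round-trip sketch; per the static signature, save returns a string blob, and whether load expects that raw string or parsed JSON is an assumption:

```ts
// Serialize the model; persisting the returned blob is up to you.
const blob: string = torch.save(model, "model.json");

// Later: rebuild the architecture, then load the saved data into it.
const restored = torch.load(new MLP(), blob);  // loadedData format is assumed
```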
Author:
PyTorch Contributors; Leao, E. et al. (2022). See also: Brain.js