| ID | Title |
|---|---|
| 22 | Sigmoid |
| 222 | Tanh |
| 23 | Softmax |
| 39 | Log Softmax |
| 42 | ReLU |
| 44 | Leaky ReLU |
| 96 | Hard Sigmoid |
| 97 | ELU |
| 98 | PReLU |
| 99 | Softplus |
| 100 | Softsign |
| 102 | Swish |
| 103 | SELU |
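
As a rough illustration of the activation functions listed above, here is a minimal NumPy sketch of a few of them (sigmoid, ReLU, and softmax). This is only an assumed implementation for orientation, not the catalog's reference solutions.

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    # 1 / (1 + e^{-x}), applied element-wise
    return 1.0 / (1.0 + np.exp(-x))

def relu(x: np.ndarray) -> np.ndarray:
    # max(0, x), applied element-wise
    return np.maximum(0.0, x)

def softmax(x: np.ndarray) -> np.ndarray:
    # Shift by the max for numerical stability before exponentiating
    exps = np.exp(x - np.max(x))
    return exps / np.sum(exps)

print(sigmoid(np.array([0.0, 2.0])))        # ~[0.5, 0.881]
print(relu(np.array([-1.0, 3.0])))          # [0.0, 3.0]
print(softmax(np.array([1.0, 2.0, 3.0])))   # probabilities summing to 1
```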
| ID | Title |
|---|---|
| 24 | Single neuron |
| 25 | Single neuron with backpropagation |
| 26 | Autograd operations |
| 40 | Dense layer |
| 41 | Convolutional 2D layer |
| 87 | Adam (Adaptive Moment Estimation) optimizer single step |
| 49 | Adam optimization to minimize an objective function |
| 89 | Self-Attention simplified (Pattern Weaver's code) |
| 53 | Self-Attention mechanism |
| 54 | RNN (Recurrent Neural Network) |
| 62 | RNN backpropagation |
| 56 | KL Divergence between two normal distributions |
| 59 | LSTM (Long Short-Term Memory) network |
| 85 | Positional encoding calculator |
| 94 | Multi-Head attention |
| 107 | Masked self-attention |
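
Since several entries above revolve around attention, here is a minimal sketch of scaled dot-product self-attention in NumPy. Function names, shapes, and the single-head simplification are my own assumptions, not the catalog's reference solution.

```python
import numpy as np

def scaled_dot_product_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    # Q, K, V: (seq_len, d_k) matrices
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                         # weighted sum of values

X = np.random.randn(4, 8)                    # 4 tokens, 8-dim embeddings
out = scaled_dot_product_attention(X, X, X)  # self-attention: Q = K = V = X
print(out.shape)                             # (4, 8)
```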
| ID | Title |
|---|---|
| 1 | Matrix times vector |
| 2 | Matrix transpose |
| 3 | Matrix reshape |
| 4 | Matrix mean |
| 5 | Matrix scalar multiplication |
| 6 | Matrix eigenvalues |
| 7 | Matrix transformation |
| 8 | Matrix inverse |
| 9 | Matrix times matrix |
| 12 | Matrix SVD using direct method |
| 13 | Matrix determinant Laplace expansion |
| 27 | Transformation matrix from basis to another basis |
| 28 | Matrix SVD using eigenvectors |
| 35 | Vector to diagonal matrix |
| 37 | Correlation matrix |
| 48 | Matrix RREF (reduced row echelon form) |
| 55 | Translation matrix |
| 11 | Linear systems using Jacobi method |
| 57 | Linear systems using Gauss-Seidel |
| 58 | Linear systems using Gaussian elimination |
| 63 | Linear systems using conjugate gradient |
| 65 | Compressed sparse row matrix (CSR) |
| 67 | Compressed sparse column matrix (CSC) |
| 68 | Matrix column space using RREF |
| 74 | Composite hypervector |
| 66 | Vector projection onto another vector |
| 76 | Vector cosine similarity |
| 83 | Vector dot product |
| 84 | Phi transform |
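
For the linear algebra entries above, a minimal NumPy sketch of two of the simplest operations (matrix-vector product and cosine similarity) is shown below; it is only an assumed illustration, not the catalog's reference solutions.

```python
import numpy as np

def matrix_times_vector(A: np.ndarray, v: np.ndarray) -> np.ndarray:
    # Each output entry is the dot product of a row of A with v
    return A @ v

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # cos(theta) = (a . b) / (||a|| * ||b||)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

A = np.array([[1.0, 2.0], [3.0, 4.0]])
v = np.array([1.0, -1.0])
print(matrix_times_vector(A, v))                                       # [-1., -1.]
print(cosine_similarity(np.array([1.0, 0.0]), np.array([1.0, 1.0])))   # ~0.707
```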