Welcome to DataSanta

Open the Deep Learning Mystery: YouTube

Welcome to DataSanta, my digital diary where numbers dance and algorithms whisper the secrets of the universe. Read more About This Blog

I believe in the power of knowledge, the magic of math, and the art of programming.


MicroTorch - Deep Learning from Scratch!

Implementing deep learning algorithms involves managing data flow in two directions: forward and backward. While the forward pass is typically straightforward, handling the backward pass can be more challenging. As discussed in previous posts, implementing backpropagation requires a strong grasp of calculus, and even minor mistakes can lead to significant issues.

Fortunately, modern frameworks like PyTorch simplify this process with autograd, an automatic differentiation system that dynamically computes gradients during training. This eliminates the need for manually deriving and coding gradient calculations, making development more efficient and less error-prone.

Now, let's build the backbone of such a system - the Tensor class!
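As a preview, here is a minimal, illustrative sketch of a scalar autograd node in the spirit of micrograd - not the post's actual implementation. Each node remembers its parents and a closure that applies the local chain rule, so backward() can walk the graph in reverse. Names like _parents and _backward are my own placeholders.

```python
class Tensor:
    """A scalar autograd node: stores a value, its gradient, and
    how to propagate gradients back to its parents."""
    def __init__(self, data, _parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = _parents
        self._backward = lambda: None  # fills in the parents' grads

    def __add__(self, other):
        other = other if isinstance(other, Tensor) else Tensor(other)
        out = Tensor(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad    # d(out)/d(self) = 1
            other.grad += out.grad   # d(out)/d(other) = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Tensor) else Tensor(other)
        out = Tensor(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad  # product rule
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()
        def build(node):
            if node not in visited:
                visited.add(node)
                for p in node._parents:
                    build(p)
                topo.append(node)
        build(self)
        self.grad = 1.0  # dL/dL = 1
        for node in reversed(topo):
            node._backward()

# Usage: gradients of z = x * y + x with respect to x and y
x, y = Tensor(2.0), Tensor(3.0)
z = x * y + x
z.backward()
print(x.grad, y.grad)  # 4.0 (= y + 1), 2.0 (= x)
```

The key design choice, mirroring PyTorch's autograd, is that every operation builds the computational graph on the fly, so no gradient formula ever has to be derived by hand.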

Figure: Autograd cover - Build an autograd!

Classification in Depth – Cross-Entropy & Softmax

Fashion-MNIST is a dataset created by Zalando Research as a drop-in replacement for MNIST. It consists of 70,000 grayscale images (28×28 pixels) categorized into 10 different classes of clothing, such as shirts, sneakers, and coats. Your mission? Train a model to classify these fashion items correctly!
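As a taste of the two ingredients in the title, here is a small NumPy sketch of my own (not the post's code): softmax turns raw scores into class probabilities, and cross-entropy penalizes the probability assigned to the true class.

```python
import numpy as np

def softmax(logits):
    # Subtract the row max for numerical stability; exp of large logits overflows.
    shifted = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)

def cross_entropy(probs, targets):
    # Negative log-likelihood of the true class, averaged over the batch.
    batch = np.arange(len(targets))
    return -np.log(probs[batch, targets] + 1e-12).mean()

# A batch of 2 examples with 10 clothing classes, as in Fashion-MNIST
logits = np.random.randn(2, 10)
targets = np.array([3, 7])  # true class indices
loss = cross_entropy(softmax(logits), targets)
print(loss)
```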

Figure: Fashion-MNIST dataset visualization

SGD, Momentum & Exploding Gradient

Gradient descent is a fundamental method for training a deep learning network. It aims to minimize the loss function \(\mathcal{L}\) by updating the model parameters in the direction that reduces the loss. By using only a batch of the data, we can approximate the direction of steepest descent. However, for large networks or more complicated challenges, this algorithm may fail! Let's find out why this happens and how to fix it.
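For reference, the vanilla SGD update is \(\theta \leftarrow \theta - \eta \nabla_\theta \mathcal{L}\), and momentum smooths it with a velocity term that accumulates past gradients. A minimal sketch of one such update step, with illustrative names of my own choosing rather than the post's code:

```python
import numpy as np

def sgd_momentum_step(params, grads, velocity, lr=0.01, beta=0.9):
    """One SGD-with-momentum update.
    velocity keeps an exponentially weighted average of past gradients,
    which damps oscillations and carries the update through flat regions."""
    for key in params:
        velocity[key] = beta * velocity[key] - lr * grads[key]
        params[key] += velocity[key]
    return params, velocity

# Usage on a toy parameter dict
params = {"w": np.array([1.0, -2.0])}
grads = {"w": np.array([0.5, 0.5])}  # gradient from the current mini-batch
velocity = {"w": np.zeros_like(params["w"])}
params, velocity = sgd_momentum_step(params, grads, velocity)
print(params["w"])
```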

Figure: Training failure - SGD can't classify the spiral pattern

Mastering Neural Networks - Linear Layer and SGD

The human brain remains one of the greatest mysteries: it is the most complex object in the universe that we know of. The processes underlying it, and the nature and source of consciousness itself, remain unknown. Artificial neural nets are useful for popularizing deep learning algorithms, but we can't say for sure what mechanism in biological neural networks enables intelligence to arise.

Figure: Training result - visualized decision boundaries