MyTorch: Custom Deep Learning Library

Project Overview

Built a comprehensive deep learning library from scratch using only Python and NumPy, replicating the core functionality of PyTorch. This educational project demonstrates a deep understanding of the fundamental mechanics behind modern deep learning frameworks.

Implementation Details

  • Constructed a complete autograd engine that records a computational graph during the forward pass and applies reverse-mode automatic differentiation during the backward pass
  • Developed essential loss functions for various learning tasks
  • Implemented multiple optimization algorithms for neural network training
  • Created diverse neural network layers and supporting utilities:
    • Linear layers for basic transformations
    • Convolutional layers for spatial data processing
    • Recurrent layers for sequential data
    • BatchNorm2D for training stability
    • MeanPool2D and MaxPool2D for dimension reduction
    • Sequence packing for efficient batch processing
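The autograd engine described above can be sketched in miniature. This is a hypothetical illustration of the technique (a graph of nodes, each knowing how to push its gradient to its parents), not MyTorch's actual `Tensor` API:

```python
import numpy as np

class Tensor:
    """Minimal reverse-mode autograd node (illustrative sketch only)."""
    def __init__(self, data, parents=(), backward_fn=None):
        self.data = np.asarray(data, dtype=float)
        self.grad = np.zeros_like(self.data)
        self.parents = parents          # upstream nodes in the graph
        self.backward_fn = backward_fn  # pushes this node's grad to parents

    def __add__(self, other):
        out = Tensor(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out.backward_fn = _backward
        return out

    def __mul__(self, other):
        out = Tensor(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out.backward_fn = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(node):
            if id(node) not in seen:
                seen.add(id(node))
                for p in node.parents:
                    visit(p)
                order.append(node)
        visit(self)
        self.grad = np.ones_like(self.data)
        for node in reversed(order):
            if node.backward_fn:
                node.backward_fn()
```

For `c = a * b + a`, calling `c.backward()` accumulates `a.grad = b + 1` and `b.grad = a`, since `a` appears twice in the graph.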
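The loss-function and optimizer bullets pair naturally: a loss returns both a value and a gradient, and the optimizer consumes that gradient. The names `mse_loss` and `SGD` here are illustrative stand-ins, not necessarily the library's own signatures:

```python
import numpy as np

def mse_loss(pred, target):
    """Mean squared error and its gradient with respect to pred."""
    diff = pred - target
    return np.mean(diff ** 2), 2.0 * diff / diff.size

class SGD:
    """SGD with classical momentum; updates parameter arrays in place."""
    def __init__(self, params, lr=0.01, momentum=0.9):
        self.params = params
        self.lr, self.momentum = lr, momentum
        self.velocity = [np.zeros_like(p) for p in params]

    def step(self, grads):
        for p, v, g in zip(self.params, self.velocity, grads):
            v *= self.momentum   # decay previous velocity
            v -= self.lr * g     # accumulate the new gradient step
            p += v               # in-place parameter update
```

Iterating `loss, grad = mse_loss(w, target); opt.step([grad])` drives `w` toward `target`, which is the whole training loop in miniature.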
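The layer interface follows the same pattern throughout: `forward` caches what `backward` will need. A linear layer is the simplest instance; assume the hypothetical `forward`/`backward` convention below rather than MyTorch's exact method names:

```python
import numpy as np

class Linear:
    """Fully connected layer y = x @ W + b with a manual backward pass."""
    def __init__(self, in_features, out_features, seed=0):
        rng = np.random.default_rng(seed)
        # Small random init; the real library's initializer may differ.
        self.W = rng.normal(0.0, 0.1, (in_features, out_features))
        self.b = np.zeros(out_features)

    def forward(self, x):
        self.x = x  # cache the input for the backward pass
        return x @ self.W + self.b

    def backward(self, grad_out):
        # Chain rule: gradients w.r.t. parameters and w.r.t. the input.
        self.dW = self.x.T @ grad_out
        self.db = grad_out.sum(axis=0)
        return grad_out @ self.W.T
```

Returning the input gradient from `backward` is what lets layers chain: each layer hands its result to the layer below it.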

Models Implemented

Using the custom library components, successfully built and trained a variety of neural network architectures:

  • Multi-Layer Perceptrons (MLPs)
  • Convolutional Neural Networks (CNNs)
  • Long Short-Term Memory networks (LSTMs)
  • Deep Neural Networks (DNNs)
  • Generative Adversarial Networks (GANs)
  • Graph Neural Networks (GNNs)
  • Gated Recurrent Units (GRUs)
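To show how such components compose into a trainable model, here is a standalone two-layer MLP fitted to XOR with plain NumPy — a sketch in the spirit of the library, with hand-derived gradients rather than the library's own autograd:

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2 -> 8 -> 1 architecture with tanh hidden units
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass (sigmoid + binary cross-entropy gives dz = p - y)
    dz2 = (p - y) / len(X)
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dh = (dz2 @ W2.T) * (1.0 - h ** 2)  # tanh'(a) = 1 - tanh(a)^2
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    # Vanilla gradient descent update
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

After training, thresholding `p` at 0.5 recovers the XOR truth table — the classic demonstration that a hidden layer buys non-linear separability.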

Technologies Used

  • Python
  • NumPy
  • Design patterns from PyTorch

Educational Value

This project demonstrates comprehensive knowledge of deep learning fundamentals, computational graph construction, automatic differentiation, and the internal workings of neural network training procedures. Building these components from scratch provides invaluable insights into optimization challenges, numerical stability, and efficient implementation strategies.

See Repo Here