Exploring Opportunities in AI & Machine Learning


Positional Encoding in Transformers Explained from First Principles
Self-attention models lack an inherent sense of word order. This article explains positional encoding in Transformers from first principles, showing how sine–cosine functions encode each token's absolute position while letting relative offsets be expressed as simple linear transformations, restoring the order information that attention alone discards. A short code sketch follows this entry.

Aryan
Mar 4
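The sine–cosine scheme the article describes is presumably the standard one from the original Transformer paper: even embedding dimensions get a sine, odd dimensions a cosine, with geometrically spaced frequencies across dimension pairs. A minimal NumPy sketch of that scheme (the function name and sizes here are illustrative, not taken from the article):

import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Return a (seq_len, d_model) matrix of sine-cosine positional encodings."""
    positions = np.arange(seq_len)[:, np.newaxis]          # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]         # (1, d_model/2)
    angle_rates = 1.0 / np.power(10000.0, dims / d_model)  # one frequency per dimension pair
    angles = positions * angle_rates                       # (seq_len, d_model/2)

    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even indices: sine
    pe[:, 1::2] = np.cos(angles)   # odd indices: cosine
    return pe

# Example: encodings for a 50-token sequence in a 128-dimensional model
pe = sinusoidal_positional_encoding(50, 128)
print(pe.shape)  # (50, 128)

Because each frequency pairs a sine with a cosine, the encoding at position pos + k is a fixed rotation of the encoding at pos, which is what makes relative offsets easy for attention to pick up.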


Types of Recurrent Neural Networks (RNNs): Many-to-One, One-to-Many & Seq2Seq Explained
This guide explains the major types of Recurrent Neural Network (RNN) architectures based on how they map inputs to outputs: Many-to-One (e.g. sentiment analysis), One-to-Many (e.g. image captioning), and Many-to-Many, covering both the aligned case (POS tagging, NER) and the encoder-decoder Seq2Seq case (machine translation), helping you understand when and why each architecture is used. A short code sketch of these shapes follows this entry.

Aryan
Jan 26
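The architectures differ mainly in which RNN outputs you read and whether a second RNN decodes from the encoder's final state. A minimal PyTorch sketch, assuming a vanilla nn.RNN and illustrative layer sizes (none of the names or dimensions below come from the article itself):

import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
x = torch.randn(4, 10, 8)               # batch of 4 sequences, 10 steps, 8 features each
outputs, h_n = rnn(x)                   # outputs: (4, 10, 16); h_n: (1, 4, 16)

# Many-to-One (e.g. sentiment analysis): read only the last step's output
sentiment_logits = nn.Linear(16, 2)(outputs[:, -1, :])    # (4, 2)

# Aligned Many-to-Many (e.g. POS tagging, NER): one prediction per input step
tag_logits = nn.Linear(16, 5)(outputs)                    # (4, 10, 5)

# Seq2Seq Many-to-Many (e.g. machine translation): the encoder's final hidden
# state seeds a separate decoder RNN that can run for a different number of steps
decoder = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
dec_out, _ = decoder(torch.randn(4, 12, 8), h_n)          # (4, 12, 16)

# One-to-Many (e.g. image captioning) is the decoder half alone,
# seeded with a single feature vector instead of an encoded sequence.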