
ML Club Video (2024-25): Dimensionality Reduction

In this ML Club session, we’ll learn how to visualize 1000-dimensional data!

High-dimensional data is everywhere!

How do we do this? We have to represent 1000 dimensions in just 2, such that the meaning of the data is still preserved. In the session we discuss two very different approaches – Principal Component Analysis (PCA) and t-Distributed Stochastic Neighbor Embedding (t-SNE).
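As a taste of what the session covers, here is a minimal sketch of both techniques in action, assuming scikit-learn is installed and using random stand-in data rather than any dataset from the video:

```python
# Sketch: reducing synthetic 1000-dimensional data to 2-D with PCA and t-SNE.
# The data here is random placeholder data, just to show the shapes involved.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1000))  # 200 points, each with 1000 dimensions

# PCA: linear projection onto the two directions of greatest variance.
X_pca = PCA(n_components=2).fit_transform(X)

# t-SNE: nonlinear embedding that tries to preserve local neighborhoods.
X_tsne = TSNE(n_components=2, perplexity=30.0, init="pca",
              random_state=0).fit_transform(X)

print(X_pca.shape, X_tsne.shape)  # both are (200, 2)
```

Both outputs are 200 points in 2-D, ready to scatter-plot; the interesting part, covered in the video, is how differently the two methods decide which structure to keep.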

How do these approaches work? Watch the video to find out!

Thank you to the StatQuest video (https://www.youtube.com/watch?v=NEaUSP4YerM) for helping me prepare this lecture. I highly recommend the channel!

This post is licensed under CC BY 4.0 by the author.