<feed xmlns="http://www.w3.org/2005/Atom"> <id>https://karthikvedula.com/</id><title>Karthik's Blog</title><subtitle>Karthik S. Vedula's Blog -- Machine Learning, Coding Tutorials, Photos, and more!</subtitle> <updated>2025-10-30T17:20:42-04:00</updated> <author> <name>Karthik S. Vedula</name> <uri>https://karthikvedula.com/</uri> </author><link rel="self" type="application/atom+xml" href="https://karthikvedula.com/feed.xml"/><link rel="alternate" type="text/html" hreflang="en" href="https://karthikvedula.com/"/> <generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator> <rights> © 2025 Karthik S. Vedula </rights> <icon>/assets/img/favicons/favicon.ico</icon> <logo>/assets/img/favicons/favicon-96x96.png</logo> <entry><title>So how does PCA actually work?</title><link href="https://karthikvedula.com/posts/how-pca-works/" rel="alternate" type="text/html" title="So how does PCA actually work?" /><published>2025-08-25T00:00:00-04:00</published> <updated>2025-08-28T13:11:42-04:00</updated> <id>https://karthikvedula.com/posts/how-pca-works/</id> <content type="text/html" src="https://karthikvedula.com/posts/how-pca-works/" /> <author> <name>Karthik S. Vedula</name> </author> <category term="Learning Interactively" /> <summary>In the age of big data, making sense of high-dimensional datasets is a common challenge. Principal Component Analysis (PCA) is one of the most powerful tools in the data scientist’s toolkit for reducing dimensionality while preserving the essence of the data. By identifying directions—called principal components—along which the data varies the most, PCA allows us to simplify complex datasets, v...</summary> </entry> <entry><title>Thank you ML Club!</title><link href="https://karthikvedula.com/posts/thanks-ml-club/" rel="alternate" type="text/html" title="Thank you ML Club!" 
/><published>2025-08-19T00:00:00-04:00</published> <updated>2025-08-19T14:57:09-04:00</updated> <id>https://karthikvedula.com/posts/thanks-ml-club/</id> <content type="text/html" src="https://karthikvedula.com/posts/thanks-ml-club/" /> <author> <name>Karthik S. Vedula</name> </author> <category term="ML Club" /> <summary>“What I cannot create, I do not understand” — Richard Feynman. This post is long overdue, but better late than never! As I graduate high school and move on to college, I wanted to take a moment to wrap up this chapter of my ML journey. (For first-time visitors: I founded a Machine Learning Club at my high school, where I taught weekly lectures to over 60 students. This blog became the ho...</summary> </entry> <entry><title>ML Club Video (2024-25): Linear Regression to Neural Networks</title><link href="https://karthikvedula.com/posts/linear-to-nn/" rel="alternate" type="text/html" title="ML Club Video (2024-25): Linear Regression to Neural Networks" /><published>2025-07-11T00:00:00-04:00</published> <updated>2025-07-11T00:00:00-04:00</updated> <id>https://karthikvedula.com/posts/linear-to-nn/</id> <content type="text/html" src="https://karthikvedula.com/posts/linear-to-nn/" /> <author> <name>Karthik S. Vedula</name> </author> <category term="ML Club" /> <summary>Linear Regression is all about lines of best fit for a given dataset. But how do we find lines of best fit? 
Here is a quick answer: (1) Start with a random line. (2) For each data point in the dataset: a) find how “close” the line is to the point; b) depending on how close/far the line is, move the line a step towards the point. (3) Step 2 can be repeated multiple times (called epochs). I...</summary> </entry> <entry><title>The Algorithm Behind Ragas in Carnatic Music</title><link href="https://karthikvedula.com/posts/raga-interactive/" rel="alternate" type="text/html" title="The Algorithm Behind Ragas in Carnatic Music" /><published>2024-11-25T00:00:00-05:00</published> <updated>2025-08-25T07:22:45-04:00</updated> <id>https://karthikvedula.com/posts/raga-interactive/</id> <content type="text/html" src="https://karthikvedula.com/posts/raga-interactive/" /> <author> <name>Karthik S. Vedula</name> </author> <category term="Learning Interactively" /> <summary>In Carnatic (and in Hindustani, though this blog post will focus on Carnatic) music, there is the concept of raga. At face value, a raga is just a scale or a collection of notes. Think of it as a subset of 16 possible notes: $R \subset \{S, R1, R2, R3, G1, G2, G3, M1, M2, P, D1, D2, D3, N1, N2, N3\}$. Each of these notes has a name, e.g. $S$ is Shadjam, and the numbers appended to the end of ...</summary> </entry> <entry><title>ML Club Video (2024-25): Dimensionality Reduction</title><link href="https://karthikvedula.com/posts/ml-club-video-24-25-dimensionality-reduction/" rel="alternate" type="text/html" title="ML Club Video (2024-25): Dimensionality Reduction" /><published>2024-11-04T00:00:00-05:00</published> <updated>2025-07-11T11:54:56-04:00</updated> <id>https://karthikvedula.com/posts/ml-club-video-24-25-dimensionality-reduction/</id> <content type="text/html" src="https://karthikvedula.com/posts/ml-club-video-24-25-dimensionality-reduction/" /> <author> <name>Karthik S. Vedula</name> </author> <category term="ML Club" /> <summary>In this ML Club session, we’ll learn how to visualize 1000-dimensional data! 
High-dimensional data is everywhere! How do we do this? We have to represent 1000 dimensions in 2 dimensions such that the meaning of the data is still preserved. In the session we talk about two very different approaches – Principal Component Analysis and t-Distributed Stochastic Neighbor Embedding. How do these ...</summary> </entry> </feed>
