Why I Joined the Deep Atlas AI/ML Accelerator
I've been using AI tools every day for about a year now. Claude in my editor, models wired into a few Slack workflows, the odd prompt hack to make a Salesforce integration a bit smarter. The more I used them, the more I started noticing something uncomfortable. I could ship with these tools, but I couldn't really explain them. I couldn't always predict why a model behaved the way it did, and if you asked me what was actually happening under the abstraction I was calling a function, I'd shrug.
At some point that gap stopped being a fun curiosity and started feeling like something I needed to address, before the field moved another two years out of reach.
The terrain is wide, and also pretty deep
AI as a field is wide. It's also pretty deep. From outside, it's honestly hard to even tell which direction to walk in.
There's classical machine learning, which is older than most of the engineers using it now. There's deep learning, which is what most people mean when they say "AI" today. There's the whole pretrained model and fine-tuning world. Then there's the newer agentic, RAG, context-engineering layer, which is where most of the production AI work seems to live. And under all of it is the math. Linear algebra, probability, the calculus that makes gradient descent gradient descent.
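To make that last phrase concrete, here's a toy of my own (nothing from the curriculum): strip away the frameworks and gradient descent is just "follow the derivative downhill, repeatedly." For a one-dimensional function like f(x) = (x − 3)², the derivative is 2(x − 3), and a loop is all you need:

```python
# Toy gradient descent on f(x) = (x - 3)^2, whose minimum sits at x = 3.
# The "calculus" part is just the derivative: f'(x) = 2 * (x - 3).

def grad(x):
    return 2 * (x - 3)

x = 0.0   # starting guess
lr = 0.1  # learning rate: how big a step to take each iteration

for _ in range(100):
    x -= lr * grad(x)  # step against the gradient, i.e. downhill

print(round(x, 4))  # converges toward 3.0
```

Everything in deep learning is a vastly scaled-up version of that loop, which is exactly why the calculus underneath it stops being optional.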
I had pieces of all of these, and none of them really connected. If I read a paper, I'd track the engineering fine but lose the math halfway through. If I watched a tutorial, I could reproduce the code without really understanding why any of it worked. There are worse problems to have, but it's the kind of problem that doesn't fix itself by accident.
What I'm actually after
What I want isn't "get good at prompting" or "land an ML role." It's narrower than that, and probably a little weirder.
I want a coherent map of the field. When I read about a new model, I want to know where it sits in the lineage. What came before it, what trick it's borrowing from, what it's trading off. I want to be able to evaluate a paper without outsourcing my judgment to whoever's tweet I happened to read first. And I want the math to be fluent enough that the abstractions feel earned instead of magical.
Breadth, so I know what exists. Depth, so I know how it works. They reinforce each other. You can't really hold a wide map of a field if every part of it is a black box, and you can't go deep on any one piece without some sense of what's around it.
That's really the project. The program is just a vehicle for it.
Why a program, and why this one
Self-study, for me, has a half-life. I start strong, lose the thread by week three. I've started enough courses to know how they end. Three browser tabs left open, and a YouTube algorithm that has decided I'd rather watch something else. A structured curriculum with a fixed schedule and other humans going through it solves the consistency problem by construction.
I also wanted something that started with the math and didn't apologize for it. Most "AI for engineers" content skips the foundations to get to the part that demos well. I needed the part that doesn't demo well.
Deep Atlas came up through a colleague at my company who'd been through it and posted an honest review internally. The curriculum runs from foundations (linear algebra, calculus, stats) through classical ML, deep learning, pretrained models, and the more recent agentic systems stuff. Roughly the breadth-and-depth ladder I'd been trying to put together for myself. The cohort format addressed the consistency problem. After an application and a call, I was in.
That's the program in two paragraphs. What I'm going to write about isn't really the program. It's what I find as I go through it.
Being a beginner again
I'm right at the start. The first sessions have me re-deriving things I should remember from school and don't. Sitting with probability distributions as actual things, not just definitions I once memorized for a midterm. Going back to linear algebra without the survival-mode engineering-math mindset I had in B.Tech. Looking at simple models and trying to follow the math instead of waving at it.
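Here's the kind of small exercise I mean, again my own toy rather than course material: instead of memorizing that a uniform distribution on [0, 1] has variance 1/12, simulate it and watch the number show up.

```python
import random

# On paper, Var(U[0, 1]) = 1/12 ≈ 0.0833.
# Simulating it turns the formula into something you can poke at.
random.seed(0)
samples = [random.random() for _ in range(100_000)]

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)

print(var)  # should land near 1/12
```

It's a two-minute script, but doing the derivation and then seeing the simulation agree is what "actual things, not definitions" feels like in practice.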
There's something useful about being a beginner again. Most of my day-to-day work is in territory I know pretty well, and the muscles for "I actually don't understand this and need to slow down" had gone a little soft. Good to use them again.
What I'll be writing here
What I'll actually share here are the projects I take on, and what I learnt from each one. A model I trained, an experiment that worked, a fine-tune I got wrong before I got it right. Each post will be whatever the project taught me. If it didn't teach me anything, it stays in my notes.
If you're somewhere similar, I hope these end up useful. If you're past this point, I'd love to be told what I'm getting wrong.
More soon.