1. MLAB
I landed in SFO for the first time between Christmas and New Year’s 2021. I was attending MLAB (“Machine Learning Alignment Bootcamp”), described as “a bootcamp to bring people interested in AI Alignment up-to-speed with the state of modern ML engineering”. The organizers (Redwood Research and Lightcone Infrastructure) had paid for me, an alignment-curious Biomedical Engineering Master’s student from the UK, to come and stay in Berkeley for a month to study ML engineering and interpretability.
Only a few months prior, I had first learned about “EA” and “AI safety” and “Existential Risk” and “Rationality”. I had attended a few AI safety and EA reading group sessions at my university (Imperial in London) and the whole thing immediately fascinated me. Though I didn’t share the EA zest for impartiality, and said as much at the EA meetings, they were still happy to have me, so I kept coming. I thought (and still think) the EAs were cool because they cared about scale and actually achieving their goals rather than just signaling how beautiful their souls were. But the thing that impressed me most was that, unlike any group I had come across before, they seemed to genuinely care about future generations in a way that made sense. Not in the degrowth environmentalist way, but in the “yay humans let’s take over the galaxy”, “economic growth is great because it cures diseases and raises quality of life” way. And this was the first time I had met people who took existential risk seriously. I sure didn’t like the sound of humanity dying out either (though at the time I would have said the greatest threat to us was war, via nuclear/bioweapon use). The EAs were (are) my allies here—I may not be willing to give my kidney to a stranger, but I care about the future beyond my lifetime and my “children's children's children” (a phrase from HPMOR).
I remember being interviewed for MLAB. I was a little wary of coming across as more “bought in” than I really was. I explained that I had just learned about the whole AI X-risk thing, and though I had reasons to be skeptical, I had given almost zero thought to the topic before, so it seemed sensible to think it through properly before writing off the idea that AI might pose a serious threat in the future.
The months beforehand had been a rapid introduction to the subculture I now know so well. I read HPMOR, Unsong, many LessWrong posts, Slate Star Codex. I met the most pro-capitalist people I had ever come across (growing up in London, my classmates were always far more socialist-inclined than I was, so meeting people who were unafraid to say simple things like “markets are good” was amazing). Of course I would later go on to meet Real Right-Wing Americans. But at that point, my median acquaintance was so far from this that a genuine libertarian seemed like a unicorn from another land.
I took an Uber from SFO to my hotel in Berkeley in the late evening. It was still COVID time, so there was a lot of wariness around in-person congregation. Nevertheless, the bootcamp organizers were planning to let us all into their office after a week of semi-quarantine in our rooms with only limited in-person interactions, provided we did daily PCR tests and wore masks. It sounded like a great deal to me after spending months mostly isolated at home.
However, there were some exceptions for people who arrived early, which included me. On New Year’s Eve, the organizers invited us to check out the office and complete some of the prep work (einops exercises) there, and then attend a party in someone’s garden.
The office was Constellation, a place now very familiar to me. I turned up on NYE and met some of my fellow bootcamp attendees. They were all very friendly and smart. We carpooled to the party. It was outdoors to minimize infection risk, so I spent the whole evening shivering from the cold. Still, I had a good time. I even met people who had authored some of the alignment papers I had read back in London.
MLAB ended up being awesome. I learned a ton. I met many interesting and smart people. I hiked alone in the Berkeley hills without a phone connection and got wonderfully lost before finding my way back by stalking a dogwalker. I hiked with my fellow bootcampers while wearing N95 masks and almost stumbling down steep slopes due to my poor balance. I played new board games. I debugged CUDA OOMs. I tested positive for COVID but didn’t have any symptoms. I met trans people for the first time. I took calls with my Master’s project supervisor at 4:30am from my hotel room kitchen without ever mentioning that I was actually in California. I gained the confidence to rewrite my Master’s project code in PyTorch. I walked into a Trader Joe’s and bought a packet of almond butter granola that only lasted two days. I almost finished listening to the Worm audiobook. I was delighted by the absence of alcohol at most social gatherings. I gained an appreciation for Blue Bottle Coffee. I went on runs to the Marina in between pair-programming sessions. I attended weekly outdoor parties for MLAB attendees and associates, remarking at one of them that this was the best month of my life so far. I walked past FTX billboards and heard talk of crypto-billionaire-funded conferences in the Bahamas. Someone suggested that instead of starting my new grad job at Stripe, I should accept a grant and do “independent research”. I started saying “priors” and “something something” and “update” and “-shaped” and “orthogonal” and “this is false”.
The Rationalists were cool, and AI risk was probably real, I decided. But I wasn’t ready to do anything about it. I came back to London and started my new grad job. It was fun actually. I merged many PRs, wrote design docs, and gradually started feeling like a real software engineer.
2. Future Forum
Then, in summer 2022, a few months into my first job, I heard about “Future Forum”. It was going to be a conference in SF about future-flavored topics like AI, longevity tech, and so on. You had to apply and be accepted to attend, and I was lucky enough to get a ticket. Patrick Collison was supposed to be giving a talk (though he later cancelled), so I managed to convince my manager to give me extra PTO days to attend the event.
I landed in SFO for the second time. Again, someone was paying for my hotel, though this one was much shoddier than the first. I have a pretty acute sense of smell and am convinced the place had a mold problem (the only negative part of the trip). It was an airport hotel right near SFO, because the (initial) conference venue was nearer to the airport than to the city.
I was jetlagged, waking up super early every morning. Luckily, this was America, so the local Starbucks (which was located in a neighboring hotel) opened at 4am. I’d walk there in the dark, buy a coffee and a biscotti, and watch the sunrise over the water. Those mornings are some of my most beautiful memories.
The conference itself was also wonderful. It started in a huge Hillsborough mansion called “Neogenesis”, then we got kicked out because the neighbors complained to the police about the noise, so we ended up moving to a regular conference venue in the city (thanks to some logistical magic from the conference organizers).
At this second venue (which bordered the Tenderloin district), someone mentioned that they were chased by a homeless guy with a knife just outside—my first acquaintance with what I now consider the most significant problem with SF. Aside from that, the conference proceeded smoothly.
People around me were whispering “wow, it’s so-and-so from Twitter,” though I usually had no idea who so-and-so was. I met a bunch of new cool people and caught up with some fellow MLAB alums. I gained my first 50 Twitter followers. I sat on the floor in a circle listening to people ramble about René Girard’s mimetic theory of desire. I attended talks from Jaan Tallinn, who described his love of dancing; from Celine Halioua, who was developing longevity drugs for dogs; and from Anders Sandberg, on Blueberry Earth. I failed to bump into my now-husband, who I later learned also happened to be there.
I was sharing my hotel room with a roommate who was much less jetlagged than I was, so we barely saw each other. But I learned that she was doing “MATS”, another AI safety program I hadn’t heard of before. Later on in the conference, I met a MATS organizer who suggested that I could join some of their live online lectures. It sounded intriguing, but I didn’t end up joining any due to the timezone discrepancy with London. Little did I know that a year from then, I’d be landing in SFO again to do MATS in Summer 2023.
3. MATS
It took almost another year of regular SWE life before I felt inspired to dip my toes back into the world of softmax(QK^T). The first version of ChatGPT had come out, and AI progress had outpaced my expectations. Even pre-ChatGPT, though, I had loved playing around with the GPT-3 playground and was surprised by the extent to which my programmer colleagues ignored LLMs.
I started reading ML papers again, and applied to MATS on a whim one night, painstakingly typing out my answers to the initial application questions on my iPhone while lying in bed in the dark. I was very happy to receive an offer, but I thought it was a big risk to quit my stable job to do a short educational program. Though I had some experience from my Master’s and MLAB, I still thought it was quite possible that I’d dislike or be bad at ML research and want to go back to software engineering. And at Stripe, I saw some interesting career opportunities—I was considering a move to a team that built tools leveraging the new LLMs or another doing ML for payments fraud detection. Luckily, I managed to convince my manager and his manager to let me take unpaid leave instead of quitting, to maintain optionality. I was genuinely open to taking ML roles at Stripe after the program and told them as much.
True to the trend of my time in the Bay Area, the MATS months became the best months of my life up to that point. Waking up to do independent research on topics I was curious about, again in the beautiful Constellation office, this time without masks, with access to GPUs, open-source Llama model weights, and an endless supply of little cans of cold brew coffee? Truly the blessed life.
I didn’t need a lot of convincing to accept a fixed-term contract to do research with Anthropic, and then a full-time offer from them. And the career change has been great so far—it’s a crazy time to be working on large models.
It’s now Summer 2025, and I live in SF with my 2.5-month-old son and husband. They make every new month the best month of my life so far. I still say “priors”, though I’m trying to cut back on unnecessary jargon. I’ve spent hours at Lighthaven and open the exquisitely designed LessWrong homepage almost daily. I’m also more optimistic about AI alignment than I was three years ago.
There are both positive and negative things about the Rationalist subculture, but my net impression is clearly positive. Though I’d say the same for most other Bay Area techie subcultures, whether the Progress Studies crowd, the Startup Bros, or the Hackers. It’s great when people are intelligent, enthusiastic, curious, truth-seeking, ambitious, and unafraid to discuss weird or controversial ideas. That’s what you get in the Bay, and I think it’s incredibly valuable.