What's really happening at the frontiers of tech?
Where are the hidden gems, the overhyped flops, and the genuinely transformative breakthroughs, and how do we navigate Silicon Valley?
Deep diver, exploring frontiers, building tomorrow
I am Rachel Wu. I currently spend my time experimenting with and ruminating about disruptive technology.
I’ve launched global AI products at Google. Before that, I founded two companies, taught myself to code at 11, and earned my B.S. in Computer Science from Columbia. I’m currently an MBA candidate at Stanford.
Find me on Twitter @missrachelww, or join my mailing list. If you want to chat, leave me a message in the contact form.
A Little Bit of Silicon Valley Magic
Origins
Imagine escaping Communist China, landing halfway around the world in Silicon Valley, only to be greeted by a 6.9 earthquake. That was my parents' introduction to America.
Shortly after my sister and I emerged, the DotCom crash wiped out my family’s savings. With two young children now on government aid, a crushing mortgage, widespread tech layoffs and no safety net, my father did what any reasonable person would do: he went to the basement and started building software.
That bootstrapped company became the crucible of my childhood. Forget family dinners; we had stand-ups reviewing customer feature requests. Every missed sale felt like a personal failure, every refund request a tragedy. It was a masterclass in the surreal, precarious reality of Silicon Valley – a pressure chamber where the constant need to reinvent is fueled by the ever-present threat of falling behind. Meanwhile, the behemoths of Google and Meta boomed just beyond the fence. They soared. We scrambled. I had to learn to fly.
Learning the Language of Machines
My flight training began early. Thanks to my big sister, a coding whiz who created Java games for fun, plus Sebastian Thrun's legendary "Building a Search Engine in Python" Udacity class, I learned to code at 11. Instead of games, I liked building applications that helped people learn. I built physics simulations in JavaScript, a matrix transformation calculator (which became my AP Calculus teacher's official classroom tool), and a peer learning platform.
By 15, when I wasn’t providing tech phone support for my family’s company, I freelanced as a web developer. My clients ranged from Singapore impact funds to teachers with side hustles. Key lesson: every tech advance creates learning gaps that must be bridged.
Processing Natural Language
While bug-hunting in our company's code, I was drawn to the human side of B2B software. It was through my mother, the co-founder responsible for everything non-code (sales, partnerships, revenue, taxes), that I saw language's power and peril. Customer calls were brutal. My mother's accent often drew dismissive, offensive remarks. Language unites us, as Sapiens puts it, but it also divides us.
My own language journey was difficult. In second grade, my teachers wanted to place me in special education for my inability to speak in public. My mother's fierce advocacy saved me.
These dual experiences drew me to natural language processing. It’s why I chose intelligent systems at Columbia, and why that introductory AI class felt less like a lecture and more like a divine revelation. Throughout college, I built NLP and AI applications for freelance work, internships, and personal projects - often implementing the latest research, such as Word2Vec (a precursor to Transformer-era LLMs).
Let’s fast forward: a machine learning talk I gave led to Google recruiting me; I launched AI globally to help Google’s 5B users when they got stuck; and, with an MBA at Stanford in the making, I’m now immersed in generative AI at Google Cloud. I like exploring the future of work, the neuroscience behind learning, and the potential of "cyborg philosophy," which aims to use tech to overcome our human limits. I've also weathered my own personal storms: severe burnout, COVID depression, a quarter-life crisis, and near financial ruin (my own fault this time). Along the way, I learned a thing or two about navigating a large, complex organization.
So read on (if I haven’t lost you yet)! Let’s navigate these times of rapid technological change together.
An AI Practitioner’s Guide: LLM Research Breakthroughs and Implications
Consider this a guided tour through the seminal AI research papers that have made models like Gemini and ChatGPT possible. And, crucially, we'll connect the dots to what you, the practitioner, need to know to build and deploy these powerful tools. Think of this as your cheat sheet for understanding concepts like Retrieval Augmented Generation (RAG), Reinforcement Learning from Human Feedback (RLHF), Parameter-Efficient Tuning (PEFT), and the art of Prompt Engineering.
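To make one of those concepts concrete, here is a minimal sketch of the RAG pattern, under simplifying assumptions: a toy in-memory corpus, bag-of-words cosine similarity in place of a real embedding model, and no actual LLM call. The names (`CORPUS`, `retrieve`, `build_prompt`) are illustrative, not from any particular library.

```python
import math
from collections import Counter

# A toy corpus standing in for an enterprise knowledge base.
CORPUS = [
    "RLHF fine-tunes a model using human preference rankings.",
    "PEFT updates a small set of adapter weights instead of the full model.",
    "RAG retrieves relevant documents and adds them to the prompt as context.",
]

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k corpus documents most similar to the query (the 'R' in RAG)."""
    q = Counter(query.lower().split())
    ranked = sorted(CORPUS,
                    key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Augment the query with retrieved context before it reaches the LLM."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

In production the bag-of-words retriever would be swapped for dense embeddings and a vector store, but the shape of the pipeline — retrieve, augment, then generate — stays the same.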
Enablers and Barriers of AI Innovation at the Frontiers
I talked to 33 Google DeepMind researchers to find out what's really enabling and blocking innovation in the enterprise. The answers involve bureaucratic hurdles, the surprising power of relationships, and why even a tech giant can struggle to build a 'sandcastle' with a thousand architects. Get ready for a peek behind the curtain, where the future of AI is being built – one quirky, frustrating, and inspiring step at a time.
“If Nobody's Angry, You're Not Disruptive Enough”: Top 3 Wisdoms from Building AI at Google
Leadership mantras, well-intentioned platitudes, and the occasional corporate koan - these flow freely at large companies like candy from a Pez dispenser. But amidst the noise, a few genuine insights managed to penetrate my cynical, startup-trained brain. These weren't just catchy slogans; they were fundamental shifts in perspective that changed how I approach work, life, and even the impending heat death of the universe (more on that later).
5 years launching AI at Google: 10 Lessons in Navigating Ambiguity
My first day at Google HQ, I felt like a lost sock in a washing machine – tumbled, confused, and wondering how I got there. I'd gone from building AI products in the scrappy, ramen-fueled world of startups to the polished, multi-generational ecosystem of Google. Let's just say the learning curve was…steep. And while I can't promise you a magic formula for navigating the chaos, I can share some hilariously awkward missteps and hard-won wisdom from my five-year AI adventure.
Why businesses need to build an AI Quality Flywheel
I talk about the major challenges in scaling RLHF—data scarcity and quality, human cost and time, and inherent process complexities. Then I go into smarter data sampling, AI-assisted labeling, and the concept of an "AI Quality Flywheel," to overcome these bottlenecks and accelerate LLM development.
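One of those smarter-sampling ideas can be sketched as uncertainty sampling: rather than sending every example to expensive human raters, rank unlabeled examples by how unsure the model is and label only the top of the queue. This is a minimal sketch with hypothetical example IDs and made-up confidence scores, not the actual pipeline described in the post.

```python
import math

def entropy(probs: list[float]) -> float:
    """Shannon entropy of a predicted class distribution (higher = less confident)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_labeling(predictions: dict[str, list[float]], budget: int) -> list[str]:
    """Rank unlabeled examples by model uncertainty; send the top `budget` to raters."""
    ranked = sorted(predictions.items(), key=lambda kv: entropy(kv[1]), reverse=True)
    return [example_id for example_id, _ in ranked[:budget]]

# Hypothetical model confidence scores over three classes per example.
predictions = {
    "ex1": [0.98, 0.01, 0.01],  # confident -> skip
    "ex2": [0.40, 0.35, 0.25],  # uncertain -> worth labeling
    "ex3": [0.34, 0.33, 0.33],  # near-uniform -> label first
}

queue = select_for_labeling(predictions, budget=2)
```

Each human label then lands where the model is weakest, which is the core loop of the flywheel: better labels, better model, better targeting of the next labels.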
The AI Bottleneck: Why Early Chatbots Floundered
We're drowning in great demos and breakthroughs at the research level – algorithms that can beat grandmasters at Go, generate realistic images, and even write passable poetry. Yet, when it comes to practical, scaled AI in the enterprise, we're often stuck in the Stone Age. So what gives?
From Research to Reality: A Practical Guide to Machine Translation in Customer Support
Thinking about using AI translation in your business? As part of a product incubation PM team, I led the global chat launches and rollouts that productionized and scaled the ML models. I’m sharing the internal document used to launch Google's global machine translation efforts; it provides a no-nonsense guide to the technology's capabilities – and its inevitable imperfections.
Google TGIF: An Unexpected Destination in my AI Journey
During my time on Google's Product Incubation team, I had the privilege of launching several AI-powered B2B2C products, including real-time translation and AI-driven customer support. Seeing these projects celebrated at a company-wide TGIF, long after I'd moved to Google Cloud, was a powerful reminder of the delayed – but ultimately significant – impact of our work, and the challenges inherent in driving disruptive innovation.
Blockchain’s Trough of Disillusionment
Remember the 2017 crypto craze? I do. It was the year I dove headfirst into Bitcoin and Ethereum, captivated by the promise of a decentralized future.
Why Everyone Should Read AI Research Papers
In the world of AI product management, where the hype cycle spins faster than a GPU training the latest LLM, there's one practice I advocate strongly: read the research papers. Ask any software engineer, and they'll tell you they can distinguish a good AI PM from a bad one based solely on how they talk about the technology. If you're building AI-powered products, skeptical about AI, or just tired of the hype, this is essential.
Beyond the AI Hype: What will and needs to happen to make AI useful?
Having created a couple of Google's highest-rated internal courses on LLMs, covering everything from RLHF to prompt engineering, I've seen firsthand the excitement and the challenges. These takes come from teaching that material and reviewing the last eight years of research. As of 2023, my main takeaway is this: we need to move beyond the theoretical and focus on the practical. That means grappling with the proliferation of models, the need for transparency, and the very real question of how to make these powerful tools truly useful.