He’s not someone who takes up a lot of space in the world. In fact, Sundar Pichai’s soft-spoken reflectiveness felt about as far from your typical tech bro as it’s possible to imagine.

How can I tell? Did I meet him?!

Not exactly, but I did hover at the back of the room when he came to visit the place where I work. (*Yes* it was his first time in Stockholm and *yes* he asked expressly to come see us!) What follows is an attempt at capturing the gist of the rather intimate hour-long ‘fireside chat’.

Mission: Ever since Pichai became CEO of Google in 2015, and then of Alphabet as well in 2019, his mission can be summed up in two words: AI first. That’s ‘first’ as in overarching design principle, not necessarily as in first to market. “The balance between making bold and responsible progress in the field of AI is something we think deeply about. It’s an uncomfortably exciting time.”

This time is different: AI has had its highs and lows, but “this time is different”. We’re in for a long ‘AI summer’, and what’s driving it is a benign spiral of factors, including access to compute power and a leap in sophistication when it comes to algorithm design, but also the fact that Google itself has sped up development through its contributions to the open source community. All in all, it’s “the most exciting time ever in computer science”, but the field is also moving so fast that it’s a challenge to keep up with what’s happening, even for someone like Pichai.

On the digital divide: AI touches civilization in a deep way, so we have to make sure that technology is more inclusive than ever before, because the stakes are higher than ever; if people get left behind in this development, it will be a very bad thing indeed. This isn’t just a tech issue, it’s bigger than that; it’s about how we organize society as a whole. But then “one of the things that gives me hope is how much more friendly and approachable tech has become. It’s not just for programmers anymore.”

On collaboration: There’s a commonality between fighting climate change and safely developing AI, in that all countries’ destinies are intertwined. You can’t have unilateral climate safety, and that’s just as true with AI. So we’ll have to collaborate; it’ll be difficult, but it needs to happen. The web is weaponized to some extent, and we need to figure out frameworks for international cooperation in order to fix that. The kinds of public-private partnerships made possible by the Paris accord could be a source of inspiration. Even with all the challenges we’re facing, however, “I’m optimistic that AI will play a big role in fighting climate change over time. For example, we see how the quest for fusion-based power generation is sped up by AI-driven simulation capabilities.”

On robotics: Translating the current modalities of AI to the physical world will be a big challenge, since we’re dealing with much higher degrees of complexity. That said, robotics as a field has been picking up pace lately. It might not yet be as obvious as what’s happening with large language models, but we’re about to see a big shift. Examples are already coming into view: “just the other day I was in the back of an autonomous car having a ten-minute phone conversation, and only afterwards did it strike me that there was no driver. I think that indicates an important shift in mindset. People in general are much more open now than we have assumed. There’s a great opportunity there.”

On fear: New technologies have always raised concerns, and that’s often a good thing because it helps us mitigate the possible downsides. But it’s also true that we can be overly pessimistic. For example, twenty years ago, as industrial automation started to pick up speed, lots of people predicted massive layoffs, but they turned out to be wrong. Of course AI is different, but let’s keep in mind that it’s likely to also bring all kinds of new jobs that we can’t even envision at present. Imagine trying to explain to someone forty years ago what a YouTube influencer does for a living. It would seem absurd to them. If I had to guess at what AI will bring, I think of pair programming as an inspiring example. It’s something we do a lot at Google, and it’s amazing to see the kind of creativity that is unleashed when two people think out loud about the same problem. I think AI will fill a similar role, something we can already see with radiologists, for example, who are assisted by AI in prioritizing and interpreting complex datasets.

On education: In the near future, we’ll see a generation of kids who grow up with their own AI tutors. I think that will be a great thing, because re-skilling and lifelong learning will be more important than ever before. I also think cross-disciplinarity will become increasingly important. We need people in tech to widen their horizons by taking courses in philosophy, neuroscience, languages. Likewise, I think it’s very important for people in the humanities to get involved with tech. The reason is that, moving forward from here, the stakes will continue to get higher. That means we can no longer afford to just build something and deal with the consequences afterwards. We need to be more mindful, we need to handle problems early, to really get things right.

Career advice: Pursue whatever you feel you’re meant to do in the world; you’ll be most impactful if you follow your heart.

On doing no evil: Google was the first tech giant to set up its own AI principles, and we’re following up on them in our yearly progress reports. We’re really trying our best, but it’s also important to understand that no single company can do it all. Academic research is essential, as are public initiatives and regulation. On that note: “AI is too important not to regulate, and also too important not to regulate well.”

The next big thing: It used to take a PhD student five years to map the folding of a single protein; now DeepMind has made 200 million protein structures publicly available. That’s just one example of the tremendous domain-specific leaps made possible by AI, and we’ll see more of that. I also think the next frontier when it comes to large language models is going to be figuring out how to give models better access to memory, and better planning capabilities.

And lastly, on Stockholm: “I’m very impressed with the work done by the teams [*our* teams! #so_proud!] I met this morning together with the prime minister. There’s obviously a vibrant startup community here and lots of great talent. The Stockholm office is a node of growing importance for Google, especially when it comes to cutting-edge expertise related to cybersecurity.”