Google’s AI Vision: Make Tech Human Again

The company’s hurricane of AI product releases at Google I/O adds up to a clear vision for AGI and for humanity

Comments

Oshyan Greene about 2 months ago

While I share an interest in and some hope for a techno-utopian future, I am not sure that having Google lead it is particularly encouraging to me (a decade ago when "Don't be evil" was still a thing I might feel differently 😄). I have a hard time swallowing the vision of a remade Google barely a few years out from some of its biggest (and some still ongoing) blunders and "inhumane" choices. Killing well-loved and not terribly expensive (to Google) products is just one example of that. It was a year ago, less from some perspectives, that Google seemed deeply behind in the AI race, and I'm confident that wasn't *just* because of what they hadn't shown yet. They clearly accomplished great things in righting the ship on their AI research since then, but they were definitely caught by surprise by OpenAI and others for a while and were struggling to catch up. It's great that they're doing well now, but it will take some consistency of such execution and output for me to have faith they can sustain it.

The "human" element of the whole show is an interesting angle. I don't want to trivialize the feelings you and perhaps others had (including Hassabis on stage), those feelings are real and there *is* something truly significant going on that is worth having strong feelings about. The potential is tremendous. But the environment of a conference itself, that very social, in-person nature of it, and the excitement of all this wondrous new technology, all of that is a multiplier for such feelings. And a skilled company - and its leaders - can make you feel powerfully positive and hopeful in that moment, yet still be speaking on behalf of and leading a company that ultimately regresses to the Capitalistic baseline: exploit opportunities (humans are at the root of most/all of them), make money, maintain that by whatever legal and sometimes pseudo-legal means possible. How do we profit off of the human, the creative, the personal, and what are the downsides of that? I don't think we can be genuinely hopeful about a future led by any company without understanding the latter part of that question much more fully.

Alex Duffy about 2 months ago

@Oshyan Your point is a good one and I'm really glad you brought it up. It's also not lost on me, which is why I tried to hedge a bit by calling out explicitly that they are an extremely profitable capitalistic organization and some of their shortcomings from the past.

With that said, I think the reality of the situation is that there are probably only a handful of companies that realistically have a chance to create AGI, by their definition of the term. And it was really reassuring to me to see that many of the people involved, from the technical side, seem to be doing it for the right reasons AND currently have enough leadership backing to do it in a way that they believe is right.

I think this is really important because even if they (Google leadership) stop doing that in the future, the technical knowledge of how it was accomplished will reside within these individuals, who are clearly idealistically motivated and increasingly less financially motivated as they accrue significant wealth, and will continue to be.

I really welcome the open sourcing of models like Gemma, the willingness to share experiments in their labs and invite experts to the table, and just how cheap and accessible their technology generally is at the moment. This is a pretty stark difference when you compare them to, say, Apple, one of the few other companies with the resources and platforms needed to accomplish this goal, which has, by and large, stuck to its historical precedent of secrecy and closedness.

I'll continue to keep a close eye on this and share what I learn! Please keep the feedback coming.

Agree and made me buy more shares

Dirk Friedrich about 2 months ago

Thank you for this emotional yet enlightening report, Alex!

That moment with Hassabis sharing about his grandmother hit me too. There's something happening here that goes beyond the usual tech demos - Google seems to be figuring out that the human relationship with AI matters as much as the capabilities themselves.

Oshyan's point about the "human element" gets at something I've been thinking about: we're not just watching AI get more powerful, we're watching the first hints of what beneficial AI alignment might actually look like in practice. Hassabis saying AI should "amplify what makes us human, not replace it" isn't corporate speak - it's a design philosophy that could make or break how this all turns out. And he keeps repeating these deeply rooted convictions in all public statements and interviews - making him a role model and beacon among the "AI power holders / elite" imho.

What especially got to me was your description of people at tables needing to process what they'd seen out loud. Because that is exactly my "current mode of coping". That's not just excitement about new tech - that's humans recognizing we're in the middle of something unprecedented and trying to figure out what it means for us. The very idea of my children growing up inside synthetic agency loops, of truth fracturing, of attention dissolving into dreamworlds, overwhelms me. It is like witnessing my own "assumptions of what is coming" come true faster than, deep inside, I had hoped - like seeing the expected storm touch down on my own roof. Even if you're wearing armor (here: you are "up-to-speed" and have figured out the new modus operandi), your loved ones might not be. And no model prepares you for that realization.

The vulnerability, the artist collaborations, the emphasis on domain experts - it all points toward AI development as relationship-building rather than just capability-scaling. It's making me think about approaches like the Parent-Child Model ( https://bit.ly/PCM_short ) for AI alignment - where instead of trying to control superintelligence through constraints, you raise it through trust, reverence, resonance and developmental scaffolding. Fortunately, Google's approach feels like steps in exactly that direction.

Your closing thought about AI helping us "think bigger" captures what I hope we're moving toward. Not just smarter tools, but technology that actually enhances rather than diminishes what makes us human. Question: Do you think this human-centered approach is Google's competitive differentiator? Are we seeing the early signs of what all AI development will have to become? Or will this (at least for now humanity-centered-looking) approach be a burden in the race for AI supremacy, slowing them down?

Ray Schnitzler about 1 month ago

The Singularity generally presumes that AGI will lead to exponential advancement. I think you're observing that we've already hit that inflection point and oh-by-the-way we'll probably, eventually hit something we all agree is true AGI.

This might be a throwaway notion, but I actually think it isn't: the standard interpretation of The Singularity is not just AGI+Exponential Growth - it's fundamentally a description of a process that is unavoidable, where humans can no longer keep up, and where we become passive beneficiaries (we hope) of the fruits of the explosion. We slowly accumulate a Critical Mass, and then watch the rocket take off. And we sit in the bleachers, and cheer, and collapse into an existential crisis.

You are, I think, offering a different view that places us squarely IN the process. We're not in the bleachers, we're ecstatically riding the f**ing rocket, our arms wrapped around it, holding on for dear life, hands joined at a game controller, thumbs mashing wildly.

Perhaps this is only a phase, and true AGI *will* leave us behind, but maybe not.

While technology is often viewed as offloading certain kinds of tasks from humans to machines that can do them better, perhaps we are the one existential driver that can't be offloaded to the machines. Sure, they *can* do anything and everything, but perhaps there will always be a role for us in helping them answer "but why bother?"