Contrary to popular belief, this generation of artificial intelligence technology is not going to replace every single job. It’s not going to lead employers to fire every knowledge worker. It’s not going to obviate the need for human writing. It’s not going to destroy the world. We don’t have to strafe the data centers or storm Silicon Valley’s top labs.
The current generation of AI technology doesn’t live up to the AGI hype in that it can’t figure out problems that it hasn’t encountered, in some way, during its training. Neither does it learn from experience. It struggles with modus ponens. It is not a god.
It does, however, very much live up to the hype in that it’s broadly useful for a dizzying variety of tasks, performing at an expert level for many of them. In a sense, it’s like having 10,000 Ph.D.’s available at your fingertips.
The joke about Ph.D.’s is that any given academic tends to know more and more about less and less. They can talk fluently about their own area of study—maybe the mating habits of giant isopods, or 16th-century Flemish lace-making techniques. But if you put them to work in an entirely new domain that requires the flexibility to learn a different kind of skill—say, filling in as a maître d' during dinner rush at a fancy Manhattan bistro—they’ll tend to flounder.
That’s a little like what language models are. Imagine a group of Ph.D.’s versed in all of human knowledge—everything from the most bizarre academic topics to the finer points of making a peanut butter and jelly sandwich. Now imagine tying all of the Ph.D.’s together with a rope and hoisting a metal sign above them that says, “Answers questions for $0.0002,” with a little slot to insert your question. By routing each question to the appropriate Ph.D., this group would know a lot about a lot, but it could still fail at any task novel enough to fall outside the recorded sum of human knowledge.
This is in line with University of Washington linguistics professor Emily Bender’s idea of the “stochastic parrot”: that language models are just probabilistically regurgitating sequences of characters based on what they’ve seen in their training data, without really knowing the “meaning” of the characters themselves.
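To make the “stochastic parrot” idea concrete, here’s a toy sketch in Python: a character-level bigram model that generates text purely from frequencies observed in a training string. This is an illustration of the concept, not how real language models work; the training text and generation length are arbitrary choices for the example.

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": a character-level bigram model. It emits
# whichever character tends to follow the current one in its training
# text, sampled in proportion to observed frequency. Nothing here has
# any notion of meaning; the output just mirrors training statistics.
training_text = "the cat sat on the mat. the cat ate the rat."

# Count how often each character follows each other character.
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(training_text, training_text[1:]):
    counts[a][b] += 1

def next_char(current: str) -> str:
    """Sample the next character in proportion to training frequency."""
    followers = counts[current]
    return random.choices(list(followers), weights=list(followers.values()))[0]

# Generate 40 characters starting from "t".
out = "t"
for _ in range(40):
    out += next_char(out[-1])
print(out)
```

Run it a few times: the output looks vaguely text-like because it mirrors the statistics of the training string, even though the program has no idea what a cat is. That, in miniature, is the parrot critique.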
It’s also in line with observations made by Yann LeCun, chief AI scientist at Meta, who has repeatedly said that large language models can’t answer questions or solve problems that they haven’t been trained on.
There’s room to quibble over whether either of these takes truly represents the current state of the technology. But even if you grant the point, where both Bender and LeCun go wrong is in treating the powers of the current generation of AI technology as a letdown. They say, pejoratively, that language models are only answering questions they’ve seen in some form in their training data.
I think we should get rid of the “only.” Language models are answering questions they’ve seen before in their training data. HOLY SHIT! That is amazing. What a crazy and important innovation.
LLMs allow us to tap into a vast reservoir of human knowledge and expertise with unprecedented ease and speed. We’re no longer limited by our individual experiences or education. Instead, we can leverage the collective wisdom of humanity to tackle challenges and explore new frontiers.
For anyone trying to figure out what to use AI for, or what kinds of products to build with the current generation of technology, this implies a simple idea: Don’t repeat yourself.
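To make that concrete, here’s a minimal sketch of handing a repetitive task, in this case summarizing a document, to a model via the OpenAI Python SDK. The model name, prompt, and `summarize` helper are illustrative placeholders, not recommendations, and the snippet assumes an `OPENAI_API_KEY` set in the environment.

```python
# A minimal sketch of "don't repeat yourself" in practice: a task you've
# done many times before (summarizing a document) handed off to a model.
# Assumes the OpenAI Python SDK (`pip install openai`) and an
# OPENAI_API_KEY in the environment; model name and prompt are
# illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str) -> str:
    """Delegate a repetitive summarization task to a language model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarize the user's text in two sentences."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(summarize("Paste the report you would otherwise summarize by hand."))
```

The point isn’t this particular API. It’s the habit: the second time you catch yourself doing the same knowledge task, ask whether it already lives somewhere in the recorded sum of human knowledge—and if it does, delegate it.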