I listened to a 33-minute podcast where Microsoft co-founder Bill Gates interviewed OpenAI CEO Sam Altman.
It was interesting, and I noted a few things.
1. On Building a Team
OpenAI is a relatively small company.
500 people.
For a 30-billion-dollar company, that’s light. Now, mind you, these are some of the best AI researchers and developers available to Silicon Valley, and they are well compensated (the median package for developers is said to be around $900k per year).
But Altman mentioned something that I believe is equally (if not more) important.
Great people REALLY WANT TO WORK with great colleagues. There’s a deep centre of gravity there. — Sam Altman
“And also, it sounds so cliche, but people feel the mission so deeply. Like, everybody wants to be in the room for the creation of AGI,” he continues.
Now, AGI (or Artificial General Intelligence) is a term that will come up often in this newsletter, so it's worth defining right now:
AGI is often described as the "Holy Grail" of artificial intelligence. In part because it is persistently sought after and supposed to be incredibly groundbreaking. But also because, well, just like the real Holy Grail, its possible existence is steeped in legend.
But back to the discussion about building a team.
Gates himself also noted that it is a difficult undertaking. But he shared something interesting.
“There are so many different forms of talent. Early in my career I thought, ok, just pure IQ and engineering IQ and of course you can apply that to financial and sales. But that turned out to be so wrong. And building teams where you have the right mix of skills is so important,” says Gates.
“The key was the talent that you assembled,” Gates said, noting that “letting them be focused on the big problem and not some near-term revenue thing” helped OpenAI achieve astonishing results in research and eventually, product.
It is true that OpenAI having the leeway to focus on building a great piece of technology, instead of a profitable product, helped them immensely.
But that brought Altman to another important point: funding.
“I think Silicon Valley would not have supported us to the level we needed. Because we had to spend so much capital on research before getting to the product. We said, ‘Eventually the model would be good enough that we know it’s going to be valuable to people’”. — Sam Altman
Altman appreciated Microsoft’s partnership because “this kind of way ahead of revenue investing is not something that the venture capital industry is good at”.
But that’s where my next point comes in.
2. But OpenAI is supposed to be a non-profit…
Yet, throughout the podcast episode, I noticed that OpenAI's outward posture as a non-profit research lab heralding the breakthrough of Artificial General Intelligence has been all but abandoned.
I don't know if this has anything to do with the fact that Altman was talking to a co-founder of OpenAI's biggest funder right now (Microsoft has put $13bn into the company). But he seemed keen on explaining how OpenAI's efforts would cater to customers or users—their wants, their pain points, and so on—as opposed to OpenAI's earlier work, which was simply trying to understand how AI worked best.
When you hear a former Y Combinator-funded founder (and eventual Y Combinator president) talking about "pain points", you know they're talking about a business they intend to use to make a profit.
And perhaps more than anyone else, Sam Altman knows Microsoft is not investing in OpenAI for “research”. They’re here to make a profit.
How can that be, with a non-profit organisation?
First off, OpenAI’s “structure” manages to retain a lot of ambiguity concerning just how much investors like Microsoft are allowed to make from a company that is supposed to be a non-profit.
Even the "non-profit" status is unclear, seeing as OpenAI as an entity is now subdivided: the actual non-profit, OpenAI, Inc., has a for-profit subsidiary, OpenAI Global, which was launched in 2019 to raise capital.
According to OpenAI, "Profit allocated to investors and employees, including Microsoft, is capped," but nobody outside the company knows exactly what that entails.
For instance, OpenAI says returns for their first round of investors are currently capped at 100x the value invested. "All residual value created above and beyond the cap will be returned to the Nonprofit for the benefit of humanity," they say.
Based on that cap, a $1bn investment can yield an investor $100bn.
Microsoft has put $13bn into OpenAI. Do the math.
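To spell it out (my arithmetic, and it assumes, almost certainly wrongly, that Microsoft got the same 100x cap as the first-round investors): $13bn × 100 = $1.3tn.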
Microsoft is not exactly an early investor, and though later investors' caps are supposed to be lower than those of the first-round investors, we aren't sure just how much lower, or whether Microsoft's is lower at all (the details of its agreed-upon cap are not available to the public).
We know that when Elon Musk, one of the founders of OpenAI, left the company, he was at loggerheads with Altman because of the direction the AI organisation was headed in.
Early last year, Musk had this to say about the company:
OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.
— Elon Musk
But this isn't all that bad, right?
If the elusive myth of Artificial General Intelligence is OpenAI's goal, then at least we can be sure that flirting with a heavily profit-oriented organisation like Microsoft (a company approaching a $3tn market cap) is only a means to an end, right?
Well, even that’s a problem.
OpenAI says, “The board determines when we've attained AGI.”
I don’t know if you got that.
OpenAI determines when they’ve attained AGI.
Not an independent body, not some scientific review by an international consortium of computer scientists.
OpenAI decides when OpenAI has succeeded at attaining something it insists is not a for-profit product, but a technology paradigm.
So if anything can be done as a means to an end, but the end is one known only to OpenAI and no one else (because AGI is a loosely defined term that is essentially science fiction, something that does not exist in any form right now), then essentially anything can be done, for as long as possible.
We are at the mercy of their standards, their practices, and in many ways, their conscience.
And if this were a non-profit scenario, that wouldn't be such a bad thing.
But in a for-profit?
Anyway, next point.
3. Both Gates and Altman seem keen on regulation
On a brighter note, however, I did notice that both Bill Gates and Sam Altman seemed keen on regulation.
Gates explained that his position, and that of government figures (especially in the US), is rooted in how poorly prepared governance was for social media, a technology that started in the early 2000s and caught fire in the mid-2010s.
When they say, ‘oh we blew it on social media. We should do better’, social media is still an outstanding challenge and there are negative elements to that in terms of polarisation and even now I’m not sure how we’d deal with that.
— Bill Gates
Sam, in turn, remarked that he did not understand "why the government was not able to be more effective around social media" (do we really not understand why? Really?).
"But it seems worth trying to understand as a case study for what they're going to go through now with AI," he continued.
But Sam conceded that there is a risk of "too much regulation on AI". To which I, in turn, think: how could there be too much regulation of something that has the capacity to upend human life as we know it?
But he took the words right out of my mouth by comparing AI regulation to how nuclear regulation currently works, and saying that should be some sort of blueprint.
“We’ve been socialising the idea of a global regulatory body that looks at those super-powerful systems because they do have such global impact. And one model we talk about is something like the IAEA (International Atomic Energy Agency). So for nuclear energy, we decided the same thing. This needs a global agency of some sort because of the potential for global impact”.
— Sam Altman on regulating AI
As Sam mentions in the podcast, “These are the stupidest these models will ever be”. We are talking about a move from relatively small improvements that help speed up a writer or a programmer, to potentially being able to create entire programs, companies, and paradigms from a single prompt.
4. Silicon Valleyians are incurably optimistic about their tech
Another interesting thing I noticed was their extreme positivity.
Hearing two Silicon Valley billionaires discuss changing the world with tech they frankly swear they "don't quite understand" was like watching two evangelists reminisce about their duties spreading the gospel in a third-world country.
On one hand, you can almost taste the optimism, which reminds you how insanely mission-oriented they are. It really makes you think about how much you could benefit from such positive delusiveness in the pursuit of your personal goals.
But on the other hand, there's a chilling reminder of the covert downside of such techno-optimism.
What I mean is that these are two individuals who will be the least adversely affected by any blindsiding, paradigm-shifting risk their technologies pose to others (they are insanely disciplined, intelligent, and rich people), yet they are the most excited by them.
Think about how Facebook founder Mark Zuckerberg insists social media is for “building relationships” and not consuming content—yet the company makes billions of dollars from the specific action he insists is “not necessarily bad, but it generally isn’t associated with all the positive benefits you get from being actively engaged or building relationships.” We also know the more time you spend on Meta’s products, the more money they make, and the less likely it is that you’re simply building relationships and not, like, just addicted or something.
Disclaimer: I don’t think tech founders can take responsibility for every single way in which their tech will be utilised (yes, not even when they specifically engineer it in certain ways that maximise profitability for themselves).
I also believe people owe it to themselves to understand the place of technology in their lives and take responsibility for their rates of consumption (even though this can be an uphill battle). It doesn't take a genius to know that Zuckerberg probably does not spend as much time on social media as the average user does (and if he does, he isn't spending that time doing the same things as the average user; he said as much in this episode of Joe Rogan's podcast).
But you've got to admit, their overt positivity almost seems like a wilful disregard for what the dangers could be, maintained so the upsides can be pursued to fruition without troubling the conscience.
And seeing that distortion field in real time, a part of you asks: how much of their drive to bring this to pass comes from a need to altruistically improve the world, and how much of it comes from a need to leave a "dent in the universe" (you know, to make their mark, even if that mark is a scar)?
Sam Altman made a great point, though: AI's boost to productivity isn't just about improving scale; it also helps us create things we wouldn't have been able to before, because of the sheer amount of time it frees up for us to think about, and build, other ideas. Comparing today's AI improvements in how we work to how programming got better as higher-level languages became mainstream among computer programmers, he put it this way:
“Going from punchcards to higher level languages didn’t just let us program a little faster, it let us do these qualitatively new things.”
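To make that abstraction leap concrete, here's a trivial sketch of my own (my example, not one from the podcast), in Python: the same computation written declaratively, the way high-level languages allow, versus spelled out step by step, the way lower-level, punchcard-era programming forced you to.

```python
numbers = [3, 1, 4, 1, 5, 9, 2, 6]

# High-level: declare *what* you want in one readable line.
total = sum(n * n for n in numbers)

# Lower-level style: spell out *how*, step by step, managing state yourself.
total_manual = 0
for n in numbers:
    total_manual += n * n

assert total == total_manual
print(total)  # 173
```

The point isn't the saved keystrokes. It's that once the bookkeeping disappears, your attention is freed for ideas you might never have attempted otherwise, which is exactly the promise Altman is making about AI.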
But that leads me to my final point:
5. What do people do with all that free time?
Luckily for us (and unluckily if you really think about it), even billionaires like Gates and Altman don’t know what this could mean for their own lives, let alone humanity.
There’s a lot of psychologically difficult parts of working on the technology, but this is the most difficult.
In some real sense, this might be the last hard thing I ever do.
— Sam Altman talking about how AGI could change the scope of human society and the world of work
When Gates shared that dilemma, Altman made a point (or hopefully a joke) about how AI could easily solve a major health-related problem that has stumped us for generations, like malaria, and Gates could instead spend his time "deciding which galaxy [he'd] like and what [he'd] do with it" (evil villain tech billionaire humour, I guess 😭).
That’s the scary part. It’s not that we have to adapt. It’s not that humanity is not super adaptable. We’ve been through these massive technological shifts and a massive percentage of the jobs that people do can change over a couple of generations. And over a couple of generations, we seem to absorb that just fine.
We see that with the great technological revolutions of the past, each technological revolution has gotten faster and this will be the fastest by far. And that’s the part that I find potentially a little scary is just the speed with which society is going to have to adapt. The labour market will change.
— Sam Altman
But Altman is confident that "we are never going to run out of problems".
Gates also noted something else we've discussed in this newsletter before: AI job-takeover fears now hinge on white-collar roles, while blue-collar jobs seem safer because the focus on the robotics side has slackened.
The prediction—like the consensus prediction—if we rewind seven or 10 years was that the impact was going to be: blue-collar work first, white-collar work second, creativity, maybe never, but certainly last cause that was magic and human. Obviously it’s gone exactly in the other direction and I think there’s like a lot of interesting takeaways about why that happened.
— Altman on ChatGPT and the threat to white-collar work
Altman said OpenAI tried building robots, but that ultimately it was too early.
So, they decided it's easier to start with intelligence and cognition and then adapt it to physicality, rather than the other way around. Which makes perfect sense, especially when you think about it in a human context: it's orders of magnitude easier to replace organs of the body that aren't the brain, because the brain ultimately does all the cognitive and sensory-mapping work. The smarts are the most important piece of the technology, and figuring that part out makes it easier to figure out which limb or mobility mechanics are required, or even irrelevant.
Anyway, concerning our question of what humans would use all that free time to do, Altman had this to say:
It's gonna be different, for sure, but I think the only way out is through. We just have to go do this thing. It's gonna happen. This is, like, now an unstoppable technological course. The value is too great, and I'm pretty confident, very confident, we'll make it work. But it does feel like it's gonna be so different.
Even though Altman genuinely believes that giving people better tools helps them do qualitatively better things, we all know that the execution of human brilliance doesn't always scale with more technology, tools, and opportunities.
Look at social media, where technology deployed in satellites is used today for AR-based avatars on TikTok, or how complex visualisation techniques are deployed for deepfake porn.
Or think about all the processing power that now sits in the pocket of nearly every human on planet Earth (even if it is not a smartphone, that power is millions of times greater than the computing power that put man on the moon).
And think about what we use all of this to achieve, in terms of discovery, productivity, creativity, and, of course, destruction.
It is a lot to think about.
Nine other things worth sharing
On Wednesday, I read Kevin De Bruyne's Players' Tribune story and found it quite inspiring. One thing I noted from it was how genuinely he considered his rejections pivotal to his success in football, because of how they forced him to respond.
I read Barack Obama's yearly Medium post that details his Favorite Books, Movies, and Music of the Year—this time, 2023.
Microsoft and scientists from the Pacific Northwest National Laboratory just used AI to discover a promising new battery material in just weeks that could reduce lithium use by up to 70%. Full Story here.
Read this in James Clear’s interview:
"The pessimist criticizes, the optimist creates."
Perhaps that's why Silicon Valley is always so optimistic.

Investor Warren Buffett on holding your temper:
"My good friend and hero, Tom Murphy, had an incredible generosity of spirit. He would do five things for you without thinking about whether you did something for him. After he was done with those five things, he'd be thinking about how to do the sixth. He was also an enormously able person in business and was kind of effortless about it. He didn't have to shout or scream or anything like that. He did everything in a very relaxed manner. Forty years ago, Tom gave me one of the best pieces of advice I've ever received. He said, "Warren, you can always tell someone to go to hell tomorrow." It's such an easy way of putting it. You haven't missed the opportunity. Just forget about it for a day. If you feel the same way tomorrow, tell them then—but don't spout off in a moment of anger."
CES 2024 was held between Monday and Friday this week. CES is an annual trade show organised by the Consumer Technology Association, typically hosting presentations of new products and technologies in the consumer electronics industry. Rowan Cheung, a technology reporter, has threads detailing some of the best announcements from each day on his X page.
Apple is trying to do that thing they do again.
So, a few weeks ago, I shared in the newsletter how Apple intentionally steers clear of buzzwords or commonplace parlance when naming their product features or introducing products. One obvious reason is that not all hot, trendy concepts turn out to be successful in the long term, and using the phrases popularly associated with any such unfavourable outcomes could affect your reputation (for instance, imagine a company of that size had released an "NFT" feature when it was trendy; now that opinions on the relevance of digital assets have shifted, people would start mentally associating the company with the failed concept).
Another reason is that Apple likes to make us believe that they think differently.
I believe that although Apple has extraordinary, best-in-class hardware and software solutions and spends tens of billions on research and development, the Cupertino-based tech giant can also be easily classified as a marketing company.
And over their history, they’ve taken the time to ensure their products are marketed in a manner that dictates to users how they should be seen or used, instead of aligning with what users already understand such technologies to be.
For instance, instead of calling the Vision Pro a pair of goggles that use AR/VR or MR, they call it a remarkable tool that works with “Spatial Computing”.
Look at their recent ad for the Vision Pro, for instance. If you really pay attention, you'll see it is reminiscent of the ad they made for the iPhone when it was first released.
But you’ll also notice that they intentionally steered clear of any iconic movies that made use of anything resembling VR/AR concept glasses (like Ready Player One or Tron), despite the fact that those would’ve been very appropriate reference points for the ad.
They don’t want you to call them VR goggles.
They want you to call them Vision Pro.

I watched a documentary about the making of the original iPhone.
Like most documentaries about heavily talked-about things, it focused on some unsung heroes in the process and also revealed the troubled history of the creation process and how heated things got internally.
Learned something new from the doc too: Jobs, popularly heralded as the "visionary" father of the device, was the hardest person to convince that Apple should make a mobile phone. But as it turns out, the hardest people to convince usually end up being the ones who find it easiest to convince others.

Guys, just look at this fgs:
And that's all for today, folks.
First off, I'd like to thank everyone for consistently reading this. You're the BEST.
But also, please share your thoughts when you have any. Let me hear what you think about the things you read.
Till next week.
Regards,
Wisdom Deji-Folutile.
The thing about OpenAI's structure is very important. It's what led to the infighting last year, and in truth it still hasn't been addressed. One thing is obvious, though: this is a very capable and committed organisation, and I'll bet on them to solve the AGI problem before anyone else. I'll also bet that it will be paywalled.
You can read deeper on their partnership with Microsoft here: https://archive.ph/s2CdK