What OpenAI's New Board Should Do Now
Fix the corporate structure, recruit AI moderates, dismantle the cult of Sam, and embrace the 'open' in OpenAI
The dust appears to have settled at OpenAI. Sam Altman is back as CEO. The company is announcing new partnerships and programs while internet rumors of an imminent ChatGPT 4.5 release are rampant.
Though little information has been revealed about what caused Sam’s firing, OpenAI’s new board chairman, Bret Taylor, has promised to build a new board, stabilize the organization, and enhance the governance structure.
Despite the progress, the problems at the root of OpenAI’s turmoil remain largely unaddressed. If any current or soon-to-be OpenAI board members are reading this, here’s what you should be doing next:
Fix the governance structure.
“OpenAI’s structure -- unlike many of the best-known examples of nonprofits that own for-profits -- is the corporate legal embodiment of trying to have its cake and eat it too -- the governance of an organization driven solely by a mission with the financing of an organization driven by traditional things like growth, market share, and profit.
“OpenAI’s problem was that its board had no fiduciary duty to the company or its shareholders. Its mission of ‘building safe and beneficial artificial general intelligence for the benefit of humanity’ is wholly misaligned with how the rest of the company operates. In order to attract top talent, the company offered competitive pay packages with large amounts of equity, and in order to fund the hiring of that talent and the computing power needed to train its models, the company raised a reported $13 billion from an array of institutional investors and companies such as Microsoft, its largest investor.”
This misalignment of values matters far more than the specific structure and legal classification of its entities, so the urgent focus should be on ensuring that everyone is incentivized to optimize for the growth and eventual profitability of the company. That means, among other things: 1) compensating board directors, partially or entirely, with equity so that they are aligned with employees and shareholders; and 2) ridding the company of any notion that its mission of pursuing artificial general intelligence (AGI) sits above its interests as a company. For example, it’s untenable to keep language in the charter that invokes a hypothetical scenario in which the board might shut the company down to help another organization pursue AGI.
Pursuing AGI as an ultimate goal sounds nice, but muddying the waters between building something for the benefit of society and building something for the benefit of shareholders is what caused this mess in the first place.
While the near-term focus should be on aligning the organization’s values, the longer-term focus should be on updating the structure and the words governing it. The corporate structure and the charter are messy, and a mountain of legal liability lurks in the messiness. If directors act in the interest of society, they violate their duty to shareholders and vice versa.
Thus far, there have been no statements or news reports suggesting any formal structural changes are on the horizon. I suspect this is because many key decision-makers genuinely feel committed to their interpretation of the mission or feel that it is too inextricably linked to the company’s identity.
Build a board of independent, tech-literate AI moderates with deep corporate governance knowledge.
This misalignment in governance and structure gave rise to a deeper misalignment in focus among OpenAI’s board, with some directors seeing its technology -- and purpose -- through the lens of its potential benefits, while other members of the board were more focused on its existential risks.
Like splintering sects of some AI religion, these opposing board camps were emblematic of some of the deeper rifts within the AI community:
Effective Accelerationism (often shortened to “e/acc,” pronounced “e-ack”) is a loosely organized movement devoted to the no-holds-barred pursuit of technological progress. The group believes that artificial intelligence and other emerging technologies should be allowed to move as fast as possible, with no guardrails or gatekeepers standing in the way of innovation.
Effective Accelerationism began as a cheeky response to an older, more established movement -- Effective Altruism -- that has become a major force in the AI world. EA, as the older group is known, got its start promoting a data-driven approach to philanthropic giving, but in recent years it has turned to worrying about AI safety and promoting the idea that powerful AI could destroy humanity if left unrestrained.
Bret Taylor wrote that the company “will build a qualified, diverse board of exceptional individuals whose collective experience represents the breadth of OpenAI’s mission – from technology to safety to policy.”
This raises the question: What AI philosophies should OpenAI’s next wave of board directors hold?
The answer is they should avoid anyone who is firmly ensconced in any of these particular camps. The loudest voices in these groups come across as extremists. Correct answers rarely live at the extremes. Instead, OpenAI’s next board directors should fully embrace the nuance of this debate and be able to acknowledge and articulate good faith arguments on all sides.
But the board needs more than just AI moderates.
I’ve long argued that many private company boards are nothing more than ‘CEO fiefdoms’ “controlled by overreaching CEOs or group-thinking insiders.”
OpenAI’s last board, by daring to fire one of Silicon Valley’s darlings, proved it was anything but. Now that Sam has returned with what I believe is even more power, there’s a chance he will shape the new board in the CEO-fiefdom mold.
This risk is precisely why new board members need three qualities:
Relationship Independence. These new directors can’t be beholden to Sam Altman. Given what happened a few weeks ago, there is likely strong momentum toward finding board members with some implicit -- or explicit -- loyalty to Sam. Bringing on these people would be a mistake. Because of his newfound power, it will be crucial to have board members with the knowledge and gravitas to challenge Sam, the executive team, and each other.
Strong Corporate Governance Instincts. OpenAI is no longer the quirky, research-focused non-profit founded in 2015. Fancy names and high-profile members who lack deep, intuitive knowledge of corporate governance, capital markets, and high-growth software businesses need not apply. Board directors must be fluent in multi-billion-dollar transactions and able to make decisions that withstand the strictest media scrutiny.
Strong Technological Literacy. It wouldn’t be wise to require the entire board to consist of deep learning researchers or even computer scientists. Still, it is reasonable to expect each board member to be at least a prolific, intuitive user of various consumer-facing generative AI products.
Dismantle the Cult of Sam.
In the 48 hours after his firing, Sam Altman used social media and his internal social capital to put on a masterclass in rallying OpenAI employees to his side. Ultimately, this employee campaign forced the then-board’s hand, and it agreed to bring him back.
However, several credible reports since the attempted coup suggest that many employees were less motivated by a belief in Altman as CEO than it might have seemed. As reported in Business Insider:
Given the absence of interest in joining Microsoft, many OpenAI employees "felt pressured" to sign the open letter, the employee admitted. The letter itself was drafted by a group of longtime staffers who have the most clout and money at stake with years of industry standing and equity built up, as well as higher pay. They began calling other staffers late on Sunday night, urging them to sign, the employee explained.
This makes sense. I find it hard to believe that employees were motivated by anything other than self-interest. The real downside of Sam’s departure was that it threatened the planned secondary sale of OpenAI shares, a deal that valued the company at $86 billion and would have allowed employees to sell their vested equity. As the same report noted:
A scheduled tender offer, which was about to let employees sell their existing vested equity to outside investors, would have been canceled. All that equity would have been worth "nothing," this employee said.
The former OpenAI employee estimated that, of the hundreds of people who signed the letter saying they would leave, "probably 70% of the folks on that list were like, 'Hey, can we, you know, have this tender go through?'"
If this is even partially true, then OpenAI employees cared more about continuity than they did about Altman specifically. That would be a good thing. Sam Altman may be a transcendent Silicon Valley icon in the mold of Steve Jobs, but no well-run company can afford to rely on one person to lead it. Good governance means ensuring the organization’s fate isn’t tied to one person, even if that person is its charismatic founder.
Whether employees love Sam or love their upcoming payday, OpenAI’s board needs to prioritize understanding OpenAI’s culture -- something the last board gravely misjudged -- and work to build an executive team that engenders employee loyalty beyond its CEO.
Embrace the “Open” in OpenAI
Upon its founding in 2015, OpenAI pledged to “build value for everyone rather than shareholders” and promised that everything from its research and code to its patents would be “shared with the world.” This founding mission appeared to reinforce the “open” in OpenAI with open research, open source software, and open access to AI’s benefits.
But the organization, pressured by competition and the need for resources, has strayed from that vision. Not only are the company’s largest models not open source, but ChatGPT, its flagship product, ranked dead last for openness in an assessment of popular large language models by researchers at Radboud University in the Netherlands.
Beyond its closed models, many of the company’s most high-profile features are similarly opaque. After the botched coup attempt -- the reasons for which are still unclear -- the new board has a lot of trust to regain on the company’s behalf from employees, investors, and customers. To be clear: there’s no world in which OpenAI returns to its 2015 form, but there are a few things the company can do to embody the ‘open’ in OpenAI:
Release the full results of the investigation.
OpenAI’s new board has hired law firm WilmerHale to investigate the events that led up to Sam’s ouster. OpenAI should make those findings public. Given the cryptic nature of Sam’s firing, both he and the company would benefit from giving the world more insight into what happened.
Get clear on the CEO’s equity.
Does Sam Altman own equity in OpenAI or not? Despite repeated public statements that he doesn’t, ample evidence suggests he does. As I previously wrote, “I suspect he does, but either answer is a red flag. He either owns equity in the company and is disingenuous when he says he doesn’t, or he doesn’t own equity, which would again illustrate a fundamental misalignment with the company’s employees and investors.”
OpenAI is a private company and thus is not required to disclose the details of its cap table, but being clear about whether the CEO has a direct or indirect equity stake would help regain some of that trust.