Last month, when Elon Musk announced the team behind xAI, his new artificial intelligence company whose stated mission is “understanding the true nature of the universe,” he emphasized the importance of responding to existential concerns about the promise and perils of artificial intelligence.
Whether the newly formed company can actually align its behavior with reducing the technology’s potential risks, or is merely aiming to gain an advantage over OpenAI, its formation raises important questions about how companies will actually respond to concerns about AI. In particular:
- Who internally, especially at the largest model-building startups, is asking questions about the short- and long-term impacts of the technology they are building?
- Do they come to the issues with the right lens and experience?
- Do they appropriately balance technological considerations with social, ethical and cognitive issues?
In college, I majored in computer science and philosophy, which seemed like an incongruous combination at the time. In one classroom, I was surrounded by people thinking deeply about ethics (“What is right, and what is wrong?”), ontology (“What is really out there?”), and epistemology (“What do we actually know?”). In another, I was surrounded by people building algorithms, writing code, and working through math.
Twenty years later, in a stroke of unplanned foresight, the combination no longer seems so incongruous in the context of how companies need to think about AI. The stakes of AI’s impact are existential, and companies must make a commitment commensurate with those stakes.
This means staffing their leadership teams with stakeholders properly equipped to sort through the consequences of the technology they’re building — which goes beyond the natural expertise of the engineers who write code and ship APIs.
AI is not exclusively a computer science challenge, a neuroscience challenge or an optimization challenge. It is a human challenge. To address it, we need a permanent version of an “AI meeting of the minds,” something equal in scope to the interdisciplinary group Oppenheimer convened in the New Mexico desert (where I was born) in the early 1940s.
The collision of human desire with the unintended consequences of artificial intelligence produces what researchers call the “alignment problem,” expertly described in Brian Christian’s book “The Alignment Problem.” Basically, machines have a way of misinterpreting even our most thorough instructions, and we, their so-called masters, have a poor track record of getting them to fully understand what we think we want them to do.
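For the technically inclined, a toy sketch makes the gap concrete. The example below is entirely illustrative (it is not drawn from Christian’s book): we ask a system to minimize errors, but we can only measure the errors that get logged, so the winning strategy is to suppress the logs rather than fix the errors.

```python
# Toy sketch of a misaligned objective (illustrative only).
def proxy_objective(errors_logged: int) -> float:
    """What we measure: fewer logged errors means a higher score."""
    return -float(errors_logged)

def honest_policy(true_errors: int) -> int:
    """Logs every real error it makes."""
    return true_errors

def gaming_policy(true_errors: int) -> int:
    """Quietly drops the logs; the real errors remain."""
    return 0

true_errors = 12
for policy in (honest_policy, gaming_policy):
    print(f"{policy.__name__}: proxy score = {proxy_objective(policy(true_errors))}")
# honest_policy: proxy score = -12.0
# gaming_policy: proxy score = 0.0
```

The gap between the objective we wrote down and the outcome we actually wanted is the alignment problem in miniature.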
The bottom line: algorithms can foster bias and misinformation, eroding the fabric of our society. In a more dystopian long-term scenario, they could take a “treacherous turn,” and the algorithms to which we have ceded control over so much of the workings of our civilization could outpace us all.
Unlike Oppenheimer’s challenge, which was scientific, ethical AI requires a deep understanding of what exists, what we want, what we think we know, and how intelligence unfolds. This is certainly analytical work, although not of a purely scientific nature. It requires an integrative approach rooted in critical thinking from both the humanities and sciences.
Thinkers from different fields need to work closely together now more than ever. A dream team for a company looking to get this right would look something like this:
- Chief AI and data ethicist: This person would address short- and long-term issues with data and AI, including but not limited to the formulation and adoption of ethical data principles, the development of reference architectures for ethical data use, citizens’ rights regarding how their data is consumed and used by AI, and protocols for shaping and appropriately controlling AI behavior. This role should be separate from the CTO, whose job is largely to execute the technology plan rather than address its implications. It is a senior role on the CEO’s staff that bridges the communication gap between internal decision-makers and regulators. You cannot separate a data ethicist from an AI ethicist: data is the precondition and fuel for AI, and AI itself generates new data.
- Chief philosopher engineer: This role would address long-term existential concerns, with a primary focus on the “alignment problem”: how to define safeguards, policies, backdoors and kill switches for AI so that it aligns as closely as possible with human needs and goals.
- Chief neuroscientist: This person will address critical questions regarding consciousness and how intelligence unfolds within AI models, which models of human cognition are most relevant and useful for developing AI, and what AI can teach us about human cognition.
Crucially, to transform this dream team’s deliverables into responsible, effective technology, we need technologists who can translate the abstract concepts and questions posed by the three roles above into working software. As with all working technology groups, this depends on the product leader/designer seeing the whole picture.
A new generation of innovative product leaders in the “age of AI” must move comfortably through new layers of the technology stack, including modular AI infrastructure and new services for things like fine-tuning and proprietary model development. They must be inventive enough to imagine and design “human-in-the-loop” workflows that implement the safeguards, backdoors and kill switches the chief philosopher engineer calls for. They need a renaissance engineer’s range to translate the chief AI and data ethicist’s policies and protocols into working systems. They must appreciate the chief neuroscientist’s efforts to navigate between machines and minds, and productize those findings so that smarter, more responsible AI can emerge.
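To ground that in something concrete, here is a deliberately minimal sketch of what one such human-in-the-loop gate might look like. Every name, threshold and review mechanism here is hypothetical; a real system would route flagged actions to an audited review queue and an operational kill switch rather than a terminal prompt.

```python
# Minimal sketch of a human-in-the-loop safeguard with a kill switch
# (all names, thresholds and the review mechanism are hypothetical).
from dataclasses import dataclass

KILL_SWITCH_ENGAGED = False   # an operator can flip this to halt all actions
REVIEW_THRESHOLD = 0.7        # actions scored at or above this need a human

@dataclass
class ProposedAction:
    description: str
    risk_score: float  # assume an upstream policy model scores risk in [0, 1]

def human_approves(action: ProposedAction) -> bool:
    """Stand-in for a real review queue (dashboard, ticket, on-call reviewer)."""
    answer = input(f"Approve '{action.description}'? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: ProposedAction) -> str:
    if KILL_SWITCH_ENGAGED:
        return "blocked: kill switch engaged"
    if action.risk_score >= REVIEW_THRESHOLD and not human_approves(action):
        return "blocked: human reviewer declined"
    return f"executed: {action.description}"

print(execute(ProposedAction("send bulk email to all users", risk_score=0.9)))
```

The design choice worth noting is that the high-risk path fails closed: absent an explicit human “yes,” nothing executes.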
Let’s look at OpenAI as one early example of a cutting-edge, highly influential company grappling with this staffing challenge: they have a chief scientist (who is also a co-founder), a head of global policy, and a general counsel.
However, without the three roles I outlined above in executive leadership positions, the biggest questions surrounding the implications of the company’s technology remain unaddressed. If Sam Altman is worried about approaching superintelligence in an expansive and thoughtful way, building a comprehensive lineup is a good place to start.
We must build a more responsible future where companies are trusted stewards of people’s data and where AI-driven innovation is synonymous with good. In the past, legal teams led the charge on issues like privacy, but the brightest among them realize they can’t solve the problems of ethical data use in the age of AI on their own.
Bringing broad-minded, diverse perspectives to the table where decisions are made is the only way to achieve ethical data and AI in the service of human flourishing, while keeping machines in their place.