Our human tendency towards bias can become entangled with the networks and institutions we create.
On November 30, 2022, OpenAI launched ChatGPT, an artificial intelligence chatbot. Since its launch, ChatGPT has done much to influence how we work, communicate, and think about the role of artificial intelligence in our lives. The speed at which this influence has been felt calls to mind the early days of the internet, as we once again engage with an emerging, potentially world-changing technology.

The pace of AI development has also raised concerns about AI safety, including the worry that AI could become too powerful too quickly, developing humanlike consciousness with the power to outpace us in a scenario echoing many a science fiction film. While AI likely has a long way to go before it reaches that stage, if indeed it ever does, in one respect it may already have shown signs of being all too human. As users have familiarized themselves with ChatGPT, concerns have emerged about what appears to be a political bias in its “thinking.” One recent analysis found ChatGPT to exhibit a “pro-environmental, left-libertarian ideology,” and in a commentary for Brookings, Jeremy Baum and John Villasenor described testing ChatGPT by asking it a range of questions about political issues. OpenAI CEO Sam Altman has acknowledged the problem, saying that ChatGPT has “shortcomings around bias” which the company is “working to improve.”

These biases reflect more than the growing pains of a new technology. They show how a tendency towards bias, to which we are all susceptible, can become embedded within the systems we create.