The Latest Trends in AI: Legal Regulations, Educational Restrictions, and the Sam Altman Controversy

The artificial intelligence (AI) industry began 2023 with a bang as schools and universities struggled with students using OpenAI’s ChatGPT to help them with homework and essay writing. Less than a week into the year, New York City Public Schools banned ChatGPT – released weeks earlier to enormous fanfare – a move that would set the stage for much of the discussion around generative AI in 2023.

As the buzz grew around ChatGPT, built by Microsoft-backed OpenAI, and rivals like Google’s Bard AI, Baidu’s Ernie Chatbot and Meta’s LLaMA, so did questions about how to handle a powerful new technology that had become accessible to the public overnight. While AI-generated images, music, videos, and computer code created by platforms such as Stability AI’s Stable Diffusion or OpenAI’s DALL-E opened up exciting new possibilities, they also fueled concerns about misinformation, targeted harassment, and copyright infringement.

In March, a group of more than 1,000 signatories, including Apple co-founder Steve Wozniak and billionaire tech entrepreneur Elon Musk, called for a pause in the development of more advanced AI in light of its “profound risks to society and humanity”. While a pause did not happen, governments and regulatory authorities began rolling out new laws and regulations to set guardrails on the development and use of AI.

After ChatGPT amassed more than 100 million users in 2023, developer OpenAI returned to the headlines in November when its board of directors abruptly fired CEO Sam Altman, alleging that he had not been “consistently candid in his communications with the board”. Although the Silicon Valley startup did not elaborate on the reasons for Altman’s firing, his removal was widely attributed to an ideological struggle within the company between safety and commercial concerns. The firing set off five days of very public drama in which OpenAI staff threatened to quit en masse and Altman was briefly hired by Microsoft, before he was reinstated and the board replaced.

In a survey of 305 developers, policymakers, and academics carried out by the Pew Research Center in July, 79 percent of respondents said they were more concerned than excited about the future of AI, or equally concerned and excited. Despite AI’s potential to transform fields from medicine to education and mass communications, respondents expressed concern about risks such as mass surveillance, government and police harassment, job displacement, and social isolation.

In December, European Union policymakers agreed on sweeping legislation to regulate the future of AI, capping a year of efforts by national governments and international bodies like the United Nations and the G7. A key concern is the data used to train AI algorithms, much of which is scraped from the internet without regard for privacy, bias, accuracy, or copyright. The EU’s draft legislation requires developers to disclose their training data and demonstrate compliance with the bloc’s laws, restricts certain types of use, and creates a pathway for user complaints.

Questions about the future of AI are also rampant in the private sector, where its use has already prompted class-action lawsuits in the US from writers, artists, and news outlets alleging copyright infringement. Fears about AI replacing jobs were a driving factor behind months-long strikes in Hollywood by the Screen Actors Guild and the Writers Guild of America. In March, Goldman Sachs predicted that generative AI could replace 300 million jobs through automation and affect two-thirds of current jobs in Europe and the US to at least some degree, making work more productive but also more automated.

The year 2024 will be a major test for generative AI, as new apps come to market and new legislation takes effect against a backdrop of global political upheaval. Over the next 12 months, more than two billion people are due to vote in elections across a record 40 countries, including geopolitical hotspots like the US, India, Indonesia, Pakistan, Venezuela, South Sudan, and Taiwan. While online misinformation campaigns are already a regular part of many election cycles, AI-generated content is expected to make matters worse as false information becomes increasingly difficult to distinguish from the real thing and easier to replicate at scale.
