
AI in 2024: Major Moments and Insights

December 20, 2024

AI continued to make waves in 2024, marked by groundbreaking advancements, policy milestones, and ethical challenges.

The European Union enacted its first major AI regulation, the White House mandated Chief AI Officers for federal agencies, and Massachusetts formed an AI Task Force featuring Northeastern leaders, including Executive Director Usama Fayyad. AI researchers earned Nobel Prizes, and the technology drove remarkable breakthroughs. It also sparked concerns, including the disbanding of Responsible AI teams at major players like OpenAI. 

The Institute for Experiential AI also excelled, with impactful research, Responsible AI training, innovative industry partnerships, and convenings of leaders to shape the future of AI in precision health and education.

Major Moments:

  • OpenAI closed its “AGI Readiness” team, highlighting the growing importance of third-party Responsible AI services.
  • The EU enacted the first major regulation targeting the use of AI.
  • AI researchers won Nobel Prizes in physics and chemistry.
  • AI-driven fraud went mainstream, helping criminals create fake identities, clone voices, and send sophisticated phishing emails.
  • Mass. Governor Maura Healey established an AI task force, including Northeastern leaders like our Executive Director Usama Fayyad.
  • The White House issued guidelines requiring federal agencies to name Chief AI Officers to manage AI risks.
  • UNESCO unveiled AI competency frameworks for education.
  • We convened leaders to shape the future of AI in precision health and education at our October conference and workshop at Northeastern.
  • Google unveiled an AI system that outperformed the world’s best weather forecasts for deadly storms.
  • The share of generative AI solutions that were purchased, rather than built in-house, declined relative to 2023.
  • The “Generative AI Wars” signaled tough times ahead for OpenAI, Microsoft, and those worried about a generative AI bubble.
  • AI advanced hospital-at-home care, driven by a federal waiver allowing at-home models to qualify for acute care reimbursement.

Insights:


Massachusetts Governor Maura Healey formed an AI Task Force to drive economic growth.

“Governor Healey’s AI Task Force asks an important question about developing an AI economy: What does it take to leverage the strengths that are natural and available here in the state of Massachusetts? In addition to general manufacturing, fisheries, and blue tech, Massachusetts has a huge strength in life sciences. While AI is talked about a lot, we haven't yet figured out how it can accelerate local industries. How can we get to breakthroughs faster? How can we evaluate the effectiveness of new antibodies, new molecules, new therapies faster? Thanks to AI, we’re nearing a Golden Age of breakthroughs in life sciences."

- Usama Fayyad, Executive Director


OpenAI disbanded its AI ethics team.

“For me, the most striking moment of 2024 was OpenAI’s decision to close its ‘AGI Readiness’ team. Last year, we saw companies like Google, Twitch, and Microsoft lay off their own AI ethics teams, so it should come as no surprise that OpenAI did the same. More telling is that the team’s senior advisor said he believed his research would be less biased and more impactful if done externally. That claim shows two things: on the one hand, companies are ill-equipped to police themselves when it comes to AI’s more harmful side; on the other, third-party ethics advisory groups and Responsible AI practices are the future.”

- Cansu Canca, Director of Responsible AI Practice


The European Union enacted the world’s first comprehensive AI law.

“The European Artificial Intelligence Act (AI Act) entered into force on Aug. 1. This regulation on the use of AI will have an impact on the rest of the world, as many countries will copy it—the good and the bad alike. The prohibition of AI applications that exploit vulnerabilities is a good thing, but the AI Act only considers AI, and any software can have a negative impact—the British Post Office Horizon scandal being a good example.”

- Ricardo Baeza-Yates, Director of Research


AI researchers won the Nobel Prize in chemistry.

“The most important AI story was, without a doubt, the Nobel Prize in chemistry being awarded to Demis Hassabis and John Jumper for their work on AlphaFold related to predicting protein structure. This award was important because it brought the role of AI in advancing our understanding of living systems to the top of everyone's mind, and also because their work relied on well-curated, open-source data on protein structure."

- Sam Scarpino, Director of AI + Life Sciences


Fraudsters and identity thieves found a new ally in AI.

“The uptick in stories about AI-related fraud is something that marks 2024, and with the political changes about to unfold, we should expect to see more fraud across modalities — text, email, voice, and even video. I don’t believe that technology alone can solve this. We need to educate students and the general public about the signs of tampered media. It’s analogous to public health, in that we need to build critical media skills and digital hygiene. This is not just about generative AI getting more human-like; we have become so dependent on immediate information access that we sacrifice careful vetting for convenience. My hope is that we build more resilient communities by relying on human-to-human knowledge and novel human-in-the-loop solutions to address these issues."

- Rupal Patel, Affiliate Faculty


UNESCO released important guidelines on AI use for teachers and students.

"The UNESCO competency frameworks for students and teachers didn't make many headlines, but I think they were a big development for educators and learners around the world. They mark a transition away from panic about AI in educational settings. The frameworks give educators guidance on how organizations can adapt to this transformational technology while also considering what should remain the same. I'm not saying the frameworks are perfect or that they won't evolve, but the global effort to draft, publish, and disseminate such guidance was a consequential event for educating humans in a world with AI, particularly in the global south."

- Joe Doiron, Associate Director, Education Programs, The Institute for Experiential AI


A Google AI model bested the world’s best weather forecasts.

“AI models are beginning to outperform conventional weather prediction across scales and lead times. Most notably, Google unveiled an AI system this year that outperformed the world’s best weather forecasts for deadly storms. It was the culmination of multiple efforts at Google DeepMind, and the advance is certainly eye-catching, although more evaluation is needed to build trustworthiness. The AI for Climate and Sustainability practice (AI4CaS) is well positioned to build reliable physics-based climate modeling that can improve accuracy and even save lives. A recent example is our NASA-funded joint work with the Tennessee Valley Authority; the machine learning startup Zeus AI, which has Northeastern roots; and subject matter experts from the Oak Ridge National Laboratory and the non-profit Research Triangle Institute, where we co-evaluated the trustworthiness of recent breakthroughs in precipitation nowcasting, first by Google DeepMind, and then by a couple of universities."

- Auroop Ganguly, Director of AI for Climate and Sustainability


Enterprise AI challenges grew while solutions got cheaper.

“The challenges of recognizing ROI and driving enterprise adoption are growing, and the preference for building over buying is accelerating. The comparison between small language models and larger, more powerful models is also gaining significant momentum. In 2023, 80% of generative AI solutions were purchased. In 2024, that number dropped to 50%, despite an eightfold increase in spending. This indicates several major trends. First, organizations are gaining a clearer understanding of the importance of protecting their data and recognizing the need to educate their workforce on its responsible use. Second, there is growing skepticism regarding the Magnificent Seven's responsible use of proprietary data. Finally, it has become less expensive to develop these solutions. EAI can support businesses in building and educating their teams about the responsible use of AI while also providing customized training that emphasizes the importance of keeping a 'human-in-the-loop.' This approach serves as a significant differentiator.”

- Tim Weidinger, Director of Business Development


AI-assisted at-home healthcare hit its stride.

"The biggest AI development of 2024 continues to be ChatGPT and RAG (Retrieval-Augmented Generation) approaches to improve the performance of language models, particularly for chatbot interactions. However, another development has gone under the radar: the growing partnership between a new Best Buy division, called Best Buy Health, and major healthcare organizations' hospital-at-home programs. The Centers for Medicare and Medicaid Services (CMS) waiver that would allow hospital-at-home models to be reimbursed as an acute care service is being reviewed by Congress. If the bill passes, it will have game-changing impacts on remote acute care delivery, creating unique partnerships between companies that develop and sell remote patient monitoring devices and healthcare organizations. This creates a new opportunity for AI that hasn't yet taken hold."

- Eugene Tunik, Director of AI + Health

Learn more about how Northeastern, through its experiential learning programs, executive education, Responsible AI masterclasses, and industry workshops, is shaping the future of AI education.

Talk with an expert here.