
Reviewing OLMo: Accelerating the Science of Language Models

  • June 18, 2024

In our inaugural AIFoundry Podcast, we examine OLMo, the Open Language Model, a groundbreaking release from the Allen Institute for Artificial Intelligence. Yulia Yakovleva, our esteemed Machine Learning Specialist in Residence, dissects the paper "OLMo: Accelerating the Science of Language Models," drawing intriguing comparisons to the Llama 2 LLM paper to underscore OLMo's unique openness. She then treats us to a live demonstration of OLMo in action on her trusty old Dell laptop.

We appreciate your patience with the blurry screenshots, as we were still learning how to use our stream-casting software. Rest assured, we're working to improve the visual quality for your viewing pleasure in future podcasts. Luckily, the audio quality is good.

Here are some further links and shortcuts from last week’s episode:

Join us for our next scheduled broadcast on Thursday, June 27, 2024 at 11AM Pacific. Yulia and I will discuss the foundational paper "Evaluating Large Language Models: A Comprehensive Survey" by Zishan Guo and colleagues, as well as the latest in LLM benchmarks, and what really matters when trying to measure LLMs.

To be part of our online studio audience, please join the AIFoundry.org Discord.

Subscribe to the AIFoundry.org calendar on Luma to stay updated on upcoming community podcasts and events.

Feel free to drop a comment on this blog below.