In our inaugural AIFoundry Podcast, we examine OLMo, the Open Language Model, a groundbreaking release from the Allen Institute for Artificial Intelligence. Yulia Yakovleva, our esteemed Machine Learning Specialist in Residence, dissects the paper "OLMo: Accelerating the Science of Language Models," drawing intriguing comparisons to the Llama 2 paper to underscore OLMo's unique openness. She then treats us to a live demonstration of OLMo in action on her trusty old Dell laptop.
We appreciate your patience with the blurry screenshots as we were still learning how to use our stream-casting software. Rest assured, we're working to improve the visual quality for your viewing pleasure in future podcasts. Luckily, the audio quality is good.
Here are some further links and replay timestamps from last week's episode:
- Hugging Face Open LLM Leaderboard - view in replay at 34:15
- GitHub repository for llama.cpp, for running LLMs locally on your laptop (see the sketch after this list) - view in replay at 43:49
- Model card for OLMo 7B on Hugging Face - view in replay at 44:55
- Looking up different quantized model versions of OLMo 7B - view in replay at 45:25
- Choosing a specific implementation of OLMo - view in replay at 46:12
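
If you want to try something like Yulia's demo yourself, here is a minimal sketch of loading a quantized OLMo 7B file locally via the llama-cpp-python bindings. Note that the podcast demo used llama.cpp directly, and the model path and quantization name below are illustrative assumptions; check the OLMo model cards on Hugging Face for the actual quantized files.

```python
# Minimal sketch: run a quantized OLMo 7B locally with llama-cpp-python.
# Assumes you have already downloaded a GGUF quantization of OLMo 7B;
# the file name below is hypothetical.
from llama_cpp import Llama  # pip install llama-cpp-python

# Load the quantized model; n_ctx sets the context window size.
llm = Llama(
    model_path="./models/olmo-7b.Q4_K_M.gguf",  # hypothetical path and quantization
    n_ctx=2048,
)

# Generate a short completion entirely on the laptop's CPU.
output = llm(
    "The Allen Institute for AI released OLMo because",
    max_tokens=64,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```

Smaller quantizations (for example 4-bit) trade a little quality for a memory footprint that fits comfortably in laptop RAM, which is what makes an on-laptop demo like this practical.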
Join us for our next scheduled broadcast on Thursday, June 27, 2024, at 11 AM Pacific. Yulia and I will discuss the foundational paper "Evaluating Large Language Models: A Comprehensive Survey" by Zishan Guo and colleagues, as well as the latest LLM benchmarks and what really matters when trying to measure LLMs.
To be part of our online studio audience, please join the AIFoundry.org Discord.
Subscribe to the AIFoundry.org calendar on Luma to stay updated on upcoming community podcasts and events.
Feel free to drop a comment on this blog below.