Building an open ecosystem for AI
We are a community of practitioners building an open-source composable AI ecosystem. By collaborating on standards for everyone, we aim to reduce the complexity of the AI industry.
What's happening in the community
Upcoming and past podcasts, plus virtual and in-person community events from AIFoundry.org.
Evaluating LLMs - recent practice and new approaches
Yulia Yakovleva will join us with her review of the foundational paper “Evaluating Large Language Models: A Comprehensive Survey” by Zishan Guo and colleagues.
OLMo-Llamafile Podcast at Mozilla.ai
Hosted at the Mozilla.ai Community Discord.
OLMo stands for Open Language Model, a fully open-source LLM released by the Allen Institute. Recently, the team at AIFoundry.org packaged OLMo with Llamafile so that developers can run OLMo locally.
AIFoundry.org Podcast - AI Paper Book Club
A bi-monthly read-out of compelling AI Papers. Paper TBD.
AI Hack Lab (Virtual) - Creating an Open Source Automated Prompt Testing Suite
At AI Hack Lab #1, Paul Zabelin of Artium demonstrated that automated testing can push AI prompts above 99% accuracy within a specific application domain.
Paul has graciously agreed to help the AIFoundry.org community create a generalized open source prompt testing engine that anyone can use for any model.
AIFoundry.org Podcast - AI Paper Book Club
Bi-monthly reading and sharing of compelling AI Papers. Paper TBD.
“Reviewing OLMo: Accelerating the Science of Language Models” - AI Paper Book Club (recap)
In our inaugural AIFoundry Podcast, we examined OLMo, the Open Language Model, a groundbreaking release from the Allen Institute for Artificial Intelligence.
AI Hack Lab #1 (recap)
The team at Nekko.ai, together with friends and colleagues, explored the question, “Can we make developing AI applications more compatible with agile programming?”
Help bring rigor to AI development
Join an AI Hack Lab and work with members of the AI community as we rethink how to build AI models and apps.
View and join our open-source projects
Open-source projects hosted by AIFoundry.org are licensed under the Apache License, Version 2.0. View the code on the AIFoundry.org GitHub, and collaborate with project contributors in the AIFoundry.org Discord server.
Llamagator
Llamagator (your LLM aggregator) is a multi-LLM prompt testing tool; a conceptual sketch of the workflow follows the feature list below.
- Test prompts against multiple LLMs or LLM versions
- Observe the relative performance of generated responses
- Run prompts multiple times to assess reliability
- Supports local and API access to LLMs
- Licensed under Apache License, Version 2.0
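To make the workflow concrete, here is a minimal Python sketch of the general idea behind multi-LLM prompt testing: fan one prompt out to several backends, repeat each call, and summarize the scores. The backend names, prompt, and scoring rule are placeholder assumptions for illustration, not Llamagator's actual interface.

```python
import statistics
from typing import Callable, Dict, List

# Hypothetical backends: each is a callable that maps a prompt to a response.
# In practice these would wrap a local model or a hosted API.
Backend = Callable[[str], str]


def run_prompt_matrix(
    prompt: str,
    backends: Dict[str, Backend],
    score: Callable[[str], float],  # e.g. 1.0 if the response matches expectations
    runs: int = 5,
) -> Dict[str, Dict[str, float]]:
    """Send one prompt to every backend several times and summarize the scores."""
    report: Dict[str, Dict[str, float]] = {}
    for name, call in backends.items():
        scores: List[float] = [score(call(prompt)) for _ in range(runs)]
        report[name] = {
            "mean": statistics.mean(scores),        # relative performance
            "stdev": statistics.pstdev(scores),     # spread across runs ~ reliability
        }
    return report


if __name__ == "__main__":
    # Stand-in "models" so the example runs without any real LLM.
    fake_models: Dict[str, Backend] = {
        "model-a": lambda p: "4",
        "model-b": lambda p: "four",
    }
    expects_digit = lambda r: 1.0 if r.strip() == "4" else 0.0
    print(run_prompt_matrix("What is 2 + 2? Answer with a digit.", fake_models, expects_digit))
```

Reporting both the mean score and the spread across repeated runs mirrors the two concerns in the feature list above: relative performance and reliability.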
$blame.ai
$blame.ai is a demo app for testing AI infrastructure from LLMs to inference engines.
We're building an open data set, application code, and reference implementation that any developer can use to showcase their AI infrastructure.
Quantization-Aware Training
A community research project to investigate data-free quantization-aware training.
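As background for the research question, here is a minimal, generic sketch of quantization-aware training in Python/PyTorch: inject the rounding error of low-bit quantization into the forward pass while letting gradients flow through unchanged (a straight-through estimator). This is an illustration of the general technique under our own assumptions, not the project's code; "data-free" refers to doing this without the original training data, for example with synthetic calibration inputs.

```python
import torch


def fake_quantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Symmetric per-tensor fake quantization: quantize then dequantize,
    so the rest of the network trains against the rounding error."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.abs().max().clamp_min(1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax, qmax) * scale
    # Straight-through estimator: forward uses the quantized value,
    # backward treats the rounding step as the identity function.
    return x + (q - x).detach()


class QATLinear(torch.nn.Linear):
    """A Linear layer whose weights are fake-quantized on every forward pass."""

    def forward(self, inp: torch.Tensor) -> torch.Tensor:
        return torch.nn.functional.linear(inp, fake_quantize(self.weight), self.bias)


if __name__ == "__main__":
    layer = QATLinear(16, 4)
    out = layer(torch.randn(2, 16))  # gradients still reach layer.weight
    out.sum().backward()
    print(layer.weight.grad.shape)
```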
View recaps of our past events
Summaries and video replays of prior AI Hack Labs and Podcasts.
- Yulia Sadovnikova, November 2, 2024
- Yulia Sadovnikova, November 1, 2024
AIFoundry.org is building a community with
Openness
We don't just mean you can use our tools; we mean you are free to use them however you want.
Integrity
Honesty, ethics, and follow-through foster trust within the community.
Empathy
Our community members are mindful and curious about other people's motivations and ideas, and they help each other with compassion and kindness.
Cooperation
Our community seeks win-win solutions with mutually beneficial outcomes.
How to reach AIFoundry.org
Frequently Asked Questions about AIFoundry.org
We believe that economically viable, decentralized, self-governed, developer-centric "Open Source AI" alternatives are critical to the future of business and society.
Machine learning and application development are entirely different disciplines with disparate communities, practices, and tools. The current state of the practice for application development includes continuous integration and deployment, treating configuration as code, and cross-functional integrated teams that own all aspects of delivering business value to production. The current state of the practice for machine learning engineers includes dealing with large datasets, model evaluation, and exploratory analysis tools like Jupyter notebooks. These divergent practices often cause friction in application development.
We do, but just like with DevOps (when the wall between Developers and Operations was finally broken), practices and tooling make a difference. As AI reshapes the industry, we believe bringing the two disciplines of machine learning and application development together is critical. The two communities have much to learn from one another. Further, we believe cross-functional teams with representation from both specialties will be more productive and yield more significant innovation. Therefore, we believe that increasing the collaboration between the two communities will create the conditions to improve both the practices and the tooling across the board.
AI Hack Labs are similar to unconferences since the agenda is fluid and determined by the participants. However, an AI Hack Lab involves hands-on work like a Hack Room at FOSDEM. Participants are expected to bring their laptops.
Consider just one tooling example: although it is theoretically possible to version Jupyter notebooks in Git, in practice, the default configuration doesn’t play nicely with diff. Noisy diffs break the usual Git pull request workflow.
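One common mitigation is to strip cell outputs and execution counts before committing, so diffs only show the code and markdown that actually changed. The following Python sketch shows the idea under that assumption; it is an illustration, not a tool the community ships.

```python
import json
import sys


def strip_outputs(path: str) -> None:
    """Remove cell outputs and execution counts from a .ipynb file in place,
    leaving only the code and markdown that reviewers care about."""
    with open(path, encoding="utf-8") as f:
        nb = json.load(f)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    with open(path, "w", encoding="utf-8") as f:
        json.dump(nb, f, indent=1, ensure_ascii=False)
        f.write("\n")


if __name__ == "__main__":
    for notebook in sys.argv[1:]:
        strip_outputs(notebook)
```

Existing tools such as nbstripout and nbdime automate this kind of normalization and provide notebook-aware diffs.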