Axiom Distributed AI is a volunteer distributed computing project that needs your help to train a massive neural network.
Why Axiom Distributed AI?
Axiom uses a distributed, biologically inspired Hebbian learning approach rather than traditional backpropagation.
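To make the contrast concrete, here is a minimal sketch of a Hebbian-style update in Python. The layer sizes, learning rate, and the use of Oja's rule are illustrative assumptions, not Axiom's actual training rule. The key property is locality: each weight changes based only on the activity of the two units it connects, with no error signal backpropagated through the network.

```python
import numpy as np

# Minimal Hebbian update sketch (illustrative assumptions, not Axiom's rule).
rng = np.random.default_rng(0)
learning_rate = 0.01                            # assumed value
weights = rng.normal(scale=0.1, size=(8, 4))    # 8 inputs -> 4 units

def hebbian_step(weights, pre):
    """One local update: strengthen weights between co-active units."""
    post = np.tanh(pre @ weights)               # post-synaptic activity
    # Oja's rule variant: the decay term keeps weights bounded,
    # since plain Hebbian updates grow without limit.
    delta = learning_rate * (np.outer(pre, post) - (post ** 2) * weights)
    return weights + delta, post

pre = rng.normal(size=8)                        # one input pattern
weights, post = hebbian_step(weights, pre)
```

Because each update is local, a worker never needs the rest of the network's error signal, which is part of what makes spreading the training across many volunteer machines plausible.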
Goal
The current goal is to prove that distributed Hebbian learning works; that alone is research-worthy. In the long term, a model trained on diverse patterns like this could serve as the foundation for other AI applications, but right now it is a proof of concept.
Methods
Axiom trains a model to recognize patterns in data using your computer's spare processing power. Rather than relying on massive data centers to train this AI, we distribute the work across many volunteers. As the model learns, you can watch its progress on the Axiom website.
1. This is not training an LLM. It is learning bit-level compression patterns, like predicting the next bit in a stream (see the next-bit sketch after this list).
2. The correlation-based credit system naturally down-weights garbage data: random noise won't correlate with other contributors' gradients, so it contributes less to the model (see the weighting sketch after this list).
3. On the website you can see the model's progress. When bit loss starts trending down, that's evidence that the model is learning patterns instead of noise.
4. Diverse data actually helps the model generalize across data types.
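To illustrate items 1 and 3, here is a sketch of next-bit prediction and the bit loss it implies. The tiny counting predictor is purely hypothetical; the point is the metric: cross-entropy measured in bits per bit sits at 1.0 for blind guessing and trends downward once a model finds structure.

```python
import math

def bit_loss(probs, bits):
    """Average cross-entropy in bits per bit.

    probs[i] is the predicted probability that bits[i] == 1.
    1.0 bits/bit means the model is guessing; lower means it has
    found structure in the stream.
    """
    total = 0.0
    for p, b in zip(probs, bits):
        p = min(max(p, 1e-9), 1.0 - 1e-9)       # avoid log(0)
        total += -math.log2(p if b == 1 else 1.0 - p)
    return total / len(bits)

# Hypothetical toy predictor: estimate P(next bit = 1) from counts so far.
stream = [1, 1, 1, 0, 1, 1, 1, 1]               # a biased stream
ones, total, probs = 1, 2, []                    # Laplace-smoothed counts
for b in stream:
    probs.append(ones / total)
    ones += b
    total += 1

print(f"bit loss: {bit_loss(probs, stream):.3f} bits/bit")  # below 1.0
```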
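Item 2 can also be sketched. How Axiom actually scores contributions is not specified here, so the cosine-similarity scheme below is only an assumption: each contributor's gradient is compared against the mean of everyone else's, and updates that do not correlate with that consensus, such as noise, receive little weight.

```python
import numpy as np

def credit_weights(grads, floor=0.0):
    """Assumed correlation-based weighting (not Axiom's documented scheme).

    Each gradient is scored by cosine similarity to the mean of the
    *other* contributors' gradients; random noise is roughly orthogonal
    to that consensus and so earns a weight near zero.
    """
    grads = np.asarray(grads, dtype=float)
    weights = np.empty(len(grads))
    for i in range(len(grads)):
        others = np.delete(grads, i, axis=0).mean(axis=0)
        denom = np.linalg.norm(grads[i]) * np.linalg.norm(others)
        weights[i] = max(floor, grads[i] @ others / denom) if denom else floor
    return weights

rng = np.random.default_rng(0)
true_grad = rng.normal(size=64)
honest = [true_grad + rng.normal(scale=0.3, size=64) for _ in range(4)]
noise = [rng.normal(size=64)]                    # a garbage contribution
print(credit_weights(honest + noise).round(2))   # honest near 1, noise near 0
```

A weighted average of the gradients then lets honest contributions dominate without anyone manually filtering out bad data.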
For Windows hosts, place files you want to contribute to training in: %USERPROFILE%\Axiom\contribute\
For Linux Mint 22.3, place files in: /boinc/Axiom/contribute/
You can put any files in there: documents, images, code, PDFs. Maybe even real-time data if you want to get creative with that.
Open source development on GitHub: https://github.com/PyHelix/boinc-Axiom
Project team / Sponsors
PyHelix, Axiom Project
