This probably deserves a separate thread, but basically the main limitation of the Pis for running LLMs is the RAM (with a weak processor you can either lower the precision or just run it slower, but if you cannot fit the model in memory, there's nothing you can really do).
Microsoft unveiled a model (Phi-3) that was decently capable yet able to run in about 4GB of RAM, which means a Pi could reasonably run it and the only constraint would be speed.
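To see why 4GB is the magic number, here's a rough back-of-envelope sketch of the RAM needed just to hold the weights at different quantization levels. The ~3.8B parameter count is Phi-3-mini's published figure; the 1.2x overhead factor for KV cache and activations is my own guess, not a measured value.

```python
def model_ram_gb(params_billion, bits_per_weight, overhead=1.2):
    """Approximate RAM (GB) to hold the weights, plus a guessed overhead
    factor for KV cache and activations."""
    bytes_per_weight = bits_per_weight / 8
    return params_billion * 1e9 * bytes_per_weight * overhead / 1e9

# Phi-3-mini, ~3.8B parameters, at common quantization levels:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{model_ram_gb(3.8, bits):.1f} GB")
```

At 4-bit quantization the weights come out around 2GB, which is why a 4GB Pi is plausible, while the full 16-bit model would blow well past it.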
Still, I don't know much about this model's capabilities for ingesting new data, and that usually takes time and power.