In this YouTube Short I show the ChatGPT-like text generation that has been available since Nextcloud 26 through the Smart Picker menu.
Just like Nextcloud itself, the AI models and the inference server are self-hosted using LocalAI. The problem with this approach is that generating text or pictures on a CPU is very slow: expect 2 to 3 minutes per response. The Nextcloud frontend is not happy about this; in fact its default timeout is only a few seconds. When a request times out, the LocalAI Docker container ends up in a bad state and needs to be restarted. Another problem is that not all models run on standard CPUs.
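One workaround is to raise the request timeout of the Nextcloud integration app so it waits long enough for the CPU to finish. A minimal sketch, assuming the OpenAI/LocalAI integration app (`integration_openai`) exposes a `request_timeout` app-config key (check the app's admin settings page for the exact name in your version):

```shell
# Run from the Nextcloud installation directory as the web server user.
# Raise the integration app's request timeout to 300 seconds,
# enough to cover the 2-3 minute CPU generation times.
# NOTE: the "request_timeout" key name is an assumption, not verified
# against every app version.
php occ config:app:set integration_openai request_timeout --value="300"
```

This only keeps Nextcloud from giving up early; it does not make generation any faster, so the underlying CPU bottleneck remains.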
So, if you have the right hardware and find the right AI models, you may get something useful out of these new tools.
For me, this remains just an experiment at the moment.