@oscarascal purely local STT engines are very hard to run on a smartphone because they need a lot of resources (RAM in particular). But I agree, it would probably be the same with a local LLM 🫤
@gael we need an open-source, collaboratively trained LLM, built on commons text data that is explicitly opt-in, plus a local inference API on mobiles and desktops (there may be more opportunities to use LLMs integrated into various desktop apps such as LibreOffice). A sketch of what calling such an API could look like is below.
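For illustration, a minimal sketch of an app talking to a local inference API, assuming a llama.cpp `llama-server` instance already running with its OpenAI-compatible endpoint; the host, port, and prompt are placeholders, and nothing leaves the machine:

```python
# Hypothetical client for a locally running llama.cpp server.
# Assumes `llama-server` is serving on localhost:8080.
import json
import urllib.request

def local_complete(prompt: str, host: str = "http://localhost:8080") -> str:
    payload = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

print(local_complete("Summarize this paragraph: ..."))
```

A desktop app like LibreOffice could call the same endpoint, so the integration surface stays identical whether the model runs on-device or not.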
@phiofx Yes. The issue with collaborative training is privacy. To me that can only be solved with homomorphic encryption, but consumer hardware is not powerful enough for this yet.
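To illustrate the idea, here is a toy sketch of additively homomorphic encryption (textbook Paillier with tiny primes, nowhere near secure): an aggregator can add clients' encrypted model updates without ever decrypting them. All the values are made up for the example.

```python
# Textbook Paillier with toy primes, for illustration only.
# Real keys use primes of 1024+ bits.
import random
from math import gcd

p, q = 293, 433
n = p * q                   # public modulus
n2 = n * n
g = n + 1                   # standard generator choice
lam = (p - 1) * (q - 1)     # simplified variant: phi(n) instead of lcm(p-1, q-1)
mu = pow(lam, -1, n)        # modular inverse, valid since gcd(lam, n) == 1

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:   # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    # L(x) = (x - 1) // n, then multiply by mu mod n
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

# Two clients' integer-quantized model updates:
a, b = 7, 35
ca, cb = encrypt(a), encrypt(b)

# The aggregator multiplies ciphertexts, which adds the
# plaintexts underneath, so it learns the sum but never sees a or b.
c_sum = (ca * cb) % n2
assert decrypt(c_sum) == a + b
print(decrypt(c_sum))  # -> 42
```

The catch is exactly the performance point above: doing this at the scale of real model updates, let alone full homomorphic training, is far beyond what consumer hardware can handle today.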
@gael LLMs are bullshit generators. They know nothing; they don't know what knowledge and truth are… The only thing they do is generate statistically plausible text: that's THE definition of bullshit.
@gael No, because LLMs are unable to evaluate the relevance and accuracy of the information they retrieve… In fact, they don't retrieve information at all: they only use an enormous quantity of text to generate more plausible text based on statistical evaluation, and nothing else. They can't retrieve information because they don't know what information is.
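As a toy illustration of "statistically plausible text, and nothing else", here is a minimal word-level Markov chain. Real LLMs are neural networks over tokens, not lookup tables, but the sampling principle being described is similar; the corpus and names are invented for the example.

```python
# A toy Markov chain: pick each next word from a frequency
# distribution learned from a corpus, with no notion of truth.
import random
from collections import defaultdict

corpus = (
    "the model predicts the next word the model has no notion of truth "
    "the model only knows which word tends to follow which word"
).split()

# For every word, record the words that follow it in the corpus.
follows = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur].append(nxt)

def generate(start: str, length: int = 12) -> str:
    word, out = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # sample proportionally to frequency
        out.append(word)
    return " ".join(out)

print(generate("the"))
# e.g. "the model has no notion of truth the model only knows which word"
```

Every output is plausible given the statistics of the training text; none of it is "known" or checked against anything.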