What is the FOSS answer to Bing GPT & Google Bard?

There was a post asking people their opinions about Edge, and many people seemed to like the idea of Edge and seemed to be OK having it on Linux (blasphemy).

Also, can we all agree how fast Edge went from a joke to a threat? I mean, it’s good now, alright! It was good back then, but it’s better now. Money, man! Money! Personally I hate MS, but I can’t ignore the fact that there is no alternative on Linux to Bing GPT and many of the features Bing offers.

If there were an open source ChatGPT, what would it look like? Who would bear the costs? How would we solve the server problem? That is, it would take a ton of server space and bandwidth. Just wondering.

I am pretty sure MS products will improve greatly due to their integration with GPT. What do we poor folks on Linux do?

I just want to know the answers, I don’t want to discuss (aka I can’t comment, I need to study), but I’m curious!

moggers,

Bavarder is good. It works on Linux too.

wiki_me, (edited )

Chatbot Arena ranks LLMs, including some truly open source ones (Apache license).

Regarding UI, maybe look here?

juliebean,

i don’t know a single thing about whatever these things you’re talking about are, but i just want to say, you’re the first person i’ve ever seen suggest that edge isn’t a joke.

drwho,

There are quite a few of them out there. Just a few from my notes (in reverse chronological order)

Self-hosted AI/ML systems have been around for a while. Granted, the vast majority of them require downloading pre-trained models before they can be used, due to how much it costs to build a system of weights from scratch.

I sometimes wonder whether it would be possible to build and train a truly open source model with BOINC or something similar. The last 30 years of history show that it’s entirely feasible to build a massively distributed computing cluster, so why not leverage that to build a model? I know how naive that sounds the moment I write it, mostly because of the difficulty of assembling a large enough training data set, which unfortunately has risk written all over it (read: people poisoning the model, à la Microsoft’s experiment with Tay on birbsite some years back).

Nicbudd,

I installed Edge on Linux because it’s fun watching Linux people get angry about it

QuazarOmega,

I admire your trolling power

PlexSheep,

You’re a monster

Canuck,

OpenAssistant

lloram239, (edited )

what do us poor folks on Linux do?

Run llama.cpp with any of the models listed here; that stuff has been around for months.

TheBloke has a lot of models converted to GGUF format which you need for llama.cpp.

Quick Start Guide (requires Nix, otherwise compile llama.cpp manually):


$ GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/TheBloke/guanaco-7B-GGUF
$ cd guanaco-7B-GGUF
$ git lfs pull --include=Guanaco-7B.Q4_0.gguf
$ nix run github:ggerganov/llama.cpp -- -m Guanaco-7B.Q4_0.gguf --instruct
> Write haiku about a penguin
 A penguin walks on ice,
 Takes a plunge in the sea,
 Hides his feet from me!
RickyRigatoni,

a package manager that can pull, build, and run from git with one command is pretty neat

257m,

I ran it on my PC with a GTX 1070, with CUDA enabled and compiled with the CUDA compile hint, but it ran really slowly. How do you get it to run fast?

lloram239,

To make use of GPU acceleration you have to compile it with the proper support (CUDA, OpenCL, ROCm) and add --gpu-layers 16 (or a larger number, however much your VRAM can handle). If that’s not enough, then the GPU/CPU is probably too slow.

You can try a smaller model; those run faster but give worse results.
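A rough sketch of what the compile-then-offload workflow looks like, assuming a CUDA toolchain is installed. The flag names (LLAMA_CUBLAS, --gpu-layers) follow llama.cpp at the time of writing and may change between releases; the model file is just the one from the quick start above.

```shell
# Build llama.cpp with CUDA (cuBLAS) support, then offload layers to the GPU.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make LLAMA_CUBLAS=1          # compile with CUDA support

./main -m Guanaco-7B.Q4_0.gguf \
       --instruct \
       --gpu-layers 16       # offload 16 layers; raise until VRAM runs out
```

If the GPU is actually being used, the startup log should mention the layers being offloaded; if generation is still slow at full offload, the hardware is likely the bottleneck.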

257m,

Thanks, I might try that out later.

makeasnek,

petals.dev is basically BitTorrent for AI: a public swarm for running and training LLMs.

pedroapero,

Wow, I didn’t know about this one. Impressive, thanks for sharing!

pezhore,

At work today, I heard the term “federated machine learning” for the first time: individual corporations keep their proprietary training data, but make the models shareable. I wonder if something similar could be done with ActivityPub?

(Feel free to steal this idea if it’s worth stealing)
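That sharing pattern is essentially federated averaging (FedAvg): each participant trains on its own private data, and only the resulting model weights leave the premises to be merged into a global model. A minimal sketch in plain Python; all names here are illustrative, not from any particular framework:

```python
# Minimal sketch of federated averaging: private data never moves,
# only locally trained weight vectors are shared and averaged.

def federated_average(client_weights):
    """Average weight vectors from several clients into one global model."""
    n_clients = len(client_weights)
    n_params = len(client_weights[0])
    return [sum(w[i] for w in client_weights) / n_clients
            for i in range(n_params)]

# Each client's weights come from training on its own proprietary data.
clients = [
    [0.2, 1.0, -0.5],   # client A
    [0.4, 0.8, -0.3],   # client B
    [0.0, 1.2, -0.4],   # client C
]
global_model = federated_average(clients)
print([round(w, 3) for w in global_model])  # → [0.2, 1.0, -0.4]
```

In a real system, each round would interleave local training with this averaging step, typically weighting each client by its dataset size.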

KRAW,

This doesn’t work when an important part of the process is making sure your data is actually good. If the data is proprietary, there is no way to make sure it is usable.

Infiltrated_ad8271,

For images there are FOSS options that you can run locally, like Stable Diffusion, which is so good that it rivals its proprietary counterparts.
But for text it's a horror. There are some you can try (see gpt4all), but in general ChatGPT has no real competition; the FOSS options are currently very bad, and even the proprietary options from big corporations, like Bard or LLaMA, are pitiful.

but I can’t help but see the fact that there is no alternative to Bing GPT

Regarding this, there are other services that also use GPT for search; here are a couple:

https://www.perplexity.ai/

https://www.phind.com/

ShittyRedditWasBetter,

Like, have you even tried a search or even attempted to look up anything? There is a lot. Most of it boils down to: stay under the free limits, start paying for compute, or run your own. It’s actually very trivial to run a slightly less effective model on a 4090. Like, you can be up and running in a few minutes if you already know pip.

mriormro,

Such a wildly helpful comment. Keep on perpetuating the type of attitude that this community is known for!

Gobbel2000,

man -k

sxan,

That’s just garden-variety intelligence, nothing artificial about it.

SomeBoyo,

Host it locally: github.com/nomic-ai/gpt4all

Tibert,

Maybe something like this? huggingface.co/chat/

The model seems to be based on the Llama model (though not only; there seem to be multiple models). I didn’t really look into how they get their training data, though. It also hallucinates more than ChatGPT.

marx2k,

“What’s the date?”

I’m not able to provide you with the current date as I am a text-based AI language model and do not have access to real-time information. However, I can tell you the date based on my knowledge cutoff, which is December 2022. Please note that this information may not be up-to-date or accurate. Is there anything else I can help you with?

Tibert,

You won’t get the date.

The date mentioned in the answer may also be a hallucination from the bot, depending on the dataset.

And an answer to Bing Chat / Bard is pretty hard to build, if something like that even exists, as the bot has to be able to search the web.

marx2k,

Interesting. I was able to pull the date from www.perplexity.ai however

Tibert,

Because it’s a search engine using GPT-3. Nothing special/FOSS about it. Or at least I don’t see any FOSS sources.

PlexSheep,

HuggingChat is pretty good, and the model is open source. No need for a phone number for your account, and it’s not run by a company whose very name is hypocritical.

db0,

As others said, you can host KoboldAI on your own, and if you don’t have a powerful GPU, anyone can use powerful LLM models via the AI Horde.
