AsAnAILanguageModel

@AsAnAILanguageModel@sh.itjust.works

AsAnAILanguageModel,

UX is not primarily about how your project looks, but about how easy it is for humans to interface with it.

On the other hand, user interfaces that are difficult to read or have misleading layouts can seem ugly.

I can recommend the book “The Gamer’s Brain” by Celia Hodent. Maybe this blog post of hers can give you a rough idea of what the book covers. Although she focuses on games, the lessons are universal.

AsAnAILanguageModel,

It is often a little depressing for Italian women when they move to Northern Europe, because the lack of people aggressively hitting on them makes them feel unattractive.

AsAnAILanguageModel,

Yeah I think it’s mostly a meme now. Either you read comments from people who loved it, or jokes from people who haven’t played it. I had no expectations before playing it and liked it so much that I even preordered the DLC, to show my support. (I don’t care about the preorder bonus, and I don’t think preordering games is reasonable, but I’m gonna play it right away anyway, so it doesn’t matter in this case)

Hugging Face Releases IDEFICS: An Open-Access 80B Visual Language Model Replicating DeepMind's Flamingo

Hugging Face released IDEFICS, an 80B open-access visual language model replicating DeepMind’s unreleased Flamingo. Built entirely on public data, it’s the first of its size available openly. Part of its training utilized OBELICS, a dataset with 141M web pages, 353M images, and 115B text tokens from Common Crawl....

AsAnAILanguageModel,

I tried the demo for a bit and it makes mistakes every time, but gets enough things right to be promising! I wonder how this will evolve in the coming months.

AsAnAILanguageModel,

That looks really cool! Is there a demo one could try somewhere?

Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback (arxiv.org)

A key technology for the development of large language models (LLMs) involves instruction tuning that helps align the models' responses with human expectations to realize impressive learning abilities. Two major approaches for instruction tuning characterize supervised fine-tuning (SFT) and reinforcement learning from human...

AsAnAILanguageModel,

Thanks, it’s great to have more multilingual models! It’s a little surprising that RLHF outperforms SFT so consistently in their experiments. I guess it’s worth it after all.

AsAnAILanguageModel,

Impressive! There are more examples here and the code repository here.

AsAnAILanguageModel,

Tried the q2 ggml and it seems to be very good! First tests make it seem as good as airoboros, which is my current favorite.

AsAnAILanguageModel,

I think that’s a very relevant comment, and I also got spooked by this before I ran it. But I noticed that the GitHub repo and the Hugging Face repo aren’t the same. You can find the remote code in the Hugging Face repo. I also briefly skimmed the code for potential causes of the memory leak, but it’s not clear to me what’s causing it. It could also be PyTorch or one of the Hugging Face libraries, since MPS support is still very much in beta.

AsAnAILanguageModel,

By MPS I mean “Metal Performance Shaders”; it’s the backend that enables PyTorch to use Apple’s Metal API and take advantage of Apple Silicon specific optimizations. I actually think it’s not unlikely that the issue is with PyTorch. The MPS support is still beta, and there was a bug that caused a lot of models to output gibberish when I used it. This bug was an open issue for a year and they only just fixed it in a recent nightly release, which is why I even bothered to give this model a try.

That being said, I think one should generally be cautious about what they run on their computers, so I appreciate that you started this discussion.
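For anyone unfamiliar with the backend, here’s a minimal sketch of how MPS is typically selected in PyTorch; the tensor at the end is just for illustration:

```python
import torch

# MPS ("Metal Performance Shaders") is PyTorch's backend for Apple
# Silicon GPUs. The usual pattern is to probe for it and fall back
# to CPU on other hardware:
if torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")  # fallback on non-Apple machines

# Any tensor or model moved to this device then runs on the GPU
# (or CPU, if MPS isn't available).
x = torch.ones(2, 2, device=device)
```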

AsAnAILanguageModel,

Without MPS it uses a lot more memory, because fp16 is not supported on the CPU backend. However, I tried it and noticed that there was an update pushed to the repository that split the model into several parts. It seems like I’m not getting any memory leaks now, even with MPS as the backend. Not sure why, but maybe it needs less RAM if the weights can be converted part by part. Time to test this model more I guess!
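To illustrate the memory point: fp16 stores two bytes per weight versus four for fp32, so falling back to fp32 on the CPU backend roughly doubles the RAM needed for the weights. A quick sketch with an illustrative 1024×1024 weight matrix:

```python
import torch

# The same weight matrix in fp32 vs fp16:
w32 = torch.zeros(1024, 1024, dtype=torch.float32)
w16 = torch.zeros(1024, 1024, dtype=torch.float16)

# element_size() is bytes per element: 4 for fp32, 2 for fp16.
bytes32 = w32.element_size() * w32.nelement()
bytes16 = w16.element_size() * w16.nelement()
```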

AsAnAILanguageModel,

I think they added full voiceover to Divinity: Original Sin in an update.

What is wrong with LLM benchmarks, and why are we still using them?

You are probably familiar with the long list of various benchmarks that new models are tested on and compared against. These benchmarks are supposedly designed to assess the model’s ability to perform in various aspects of language understanding, logical reasoning, information recall, and so on....

AsAnAILanguageModel,

I just started saving a list of prompts to test models with. It’s not exhaustive of course, but there are a few which help me cull new models quickly. Of course I can’t share them because I don’t want them to leak into training data. :)
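For what it’s worth, the idea is simple enough to sketch. `generate` below is a placeholder for whatever inference call a given model exposes, and the prompts are made up for illustration (mine stay private, of course):

```python
# Keep a private list of prompts and run every new model over them,
# then eyeball the (prompt, response) pairs to cull weak models quickly.
PROMPTS = [
    "Explain in one sentence why the sky is blue.",
    "Write a Python function that reverses a string.",
]

def evaluate(generate, prompts):
    """Collect (prompt, response) pairs for manual review."""
    return [(p, generate(p)) for p in prompts]

# Stand-in "model" that just echoes; swap in a real inference call.
results = evaluate(lambda p: f"[model output for: {p}]", PROMPTS)
```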

AsAnAILanguageModel,

Very interesting! Did you test ChatGPT as well for comparison?
