ianRobinson, to Podcast
@ianRobinson@mastodon.social avatar

Computational Conversations Episode 1: How AI, machine learning, and LLMs relate.

Conrad Wolfram and Jon McLoone discuss.

15-minute video.

#Podcast #AI #LLM #MachineLearning

chikim, to macos
@chikim@mastodon.social avatar

Cool tip for running LLMs on Apple Silicon! By default, macOS allows the GPU to use up to 2/3 of RAM on machines with <=36GB and 3/4 on machines with >36GB. I used the command sudo sysctl iogpu.wired_limit_mb=57344 to override that and allocate 56GB of my 64GB to the GPU. This allowed me to load all layers of larger models for faster speeds! #MacOS #LLM #AI #ML
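For reference, the override looks like this (a sketch: 57344 = 56 × 1024 MB, and the sysctl setting does not persist across reboots, so it reverts to the default on restart):

```shell
# Query the current GPU wired-memory limit (0 means the macOS default)
sysctl iogpu.wired_limit_mb

# Allow the GPU to wire up to 56 GB (56 * 1024 = 57344 MB) on a 64 GB machine
sudo sysctl iogpu.wired_limit_mb=57344

# Restore the default behavior without rebooting
sudo sysctl iogpu.wired_limit_mb=0
```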

hlfshell, to llm
@hlfshell@hachyderm.io avatar

Read through this paper last night https://arxiv.org/abs/2402.04494
TL;DR: DeepMind repurposed a standard LLM architecture to accept a single tokenized game board (not a game sequence) and output an action-value function (the predicted probability of winning after a move serves as its score). With no additional game search, they achieved grandmaster equivalence, with two caveats.
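The move-selection idea — score every legal move with the model's action-value estimate and play the argmax, with no tree search — can be sketched like this (all names here are illustrative, and the stub scorer stands in for the paper's trained transformer):

```python
def best_move(board_tokens, legal_moves, action_value):
    """Pick the move with the highest predicted win probability.

    `action_value(board_tokens, move)` stands in for the trained
    transformer's action-value head from the paper; here it is just
    any callable returning a win probability in [0, 1].
    """
    return max(legal_moves, key=lambda m: action_value(board_tokens, m))

# Toy usage with a stub "model" that just looks up precomputed scores.
scores = {"e4": 0.61, "a3": 0.48, "d4": 0.59}
stub_model = lambda board, move: scores[move]
print(best_move("tokenized-board", list(scores), stub_model))  # e4
```

The point of the paper is that this greedy one-step policy, backed only by a strong learned evaluator, is enough for grandmaster-level play.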

metin, to ai
@metin@graphics.social avatar

When generative AI is trained with AI-generated data, it becomes degenerat(iv)e AI.

vitriolix, to ai
@vitriolix@mastodon.social avatar

the great ai ouroboros

"Adobe Firefly used thousands of Midjourney images in training its 'ethical AI' model"


thomasapowell, to MemeryAlpha
@thomasapowell@fosstodon.org avatar

If we are true to form in tech, then for any problems AI causes, someone will certainly claim the solution is in fact more AI.

It’s the tech hype circle of life. 🫠

chikim, to llm
@chikim@mastodon.social avatar

Mixtral-8x22b keeps asking for feedback on how it can improve, even though it has no memory. lol "I understand that our conversation will not be used directly to improve my model, but the feedback you provide can still help me understand your needs better and improve my responses in future interactions with you or other users. If there are any specific areas where you feel I could improve, please let me know so that I can address those concerns in our future conversations." #LLM #Mistral #AI

vij, to llm
@vij@sfba.social avatar

Altman’s really trying to pass off a smooth-bullshit #LLM chatbot as a factual oracle.

I’ve invented a revolutionary electric car, behold:

markhughes, to ai
@markhughes@mastodon.social avatar

What if AI cannot replace people's jobs, but is part of a con trick?

Not conning the wealthy corporations that are enthusiastically firing their skilled workers, supposedly to replace them with statistical models that, it seems, are not turning a profit for their owners.

What if the con is really about convincing skilled workers that they must accept lower pay and worse conditions?

Where have we seen this before?

br00t4c, to llm
@br00t4c@mastodon.social avatar

Words are flowing out like endless rain: Recapping a busy week of LLM news


thomasapowell, to llm
@thomasapowell@fosstodon.org avatar

With today's ruckus over a revealed initial prompt, maybe that should be a feature, not a bug?

LLMs being used for code generation could be used to turbocharge supply-chain attacks and get exploits inserted into code bases:

  • find a hosted LLM service that can do code gen
  • exploit the site to change the controlling prompt, adding subtle bugs or references to malware dependencies in package.json
  • pwn at scale

Don't worry, webapp sec is so good this will never happen! 😬

danslerush, to llm
@danslerush@floss.social avatar

« The LLMentalist Effect: how chat-based Large Language Models (LLMs) replicate the mechanisms of a psychic's con » by @baldur Bjarnason

deirdresm, to ai
@deirdresm@hachyderm.io avatar

This is the single best explanation (long!) I've read about why LLMs are a con. Great piece from @baldur.


my_actual_brain, to apple
@my_actual_brain@fosstodon.org avatar

The rumor is that Apple is going to have some AI chips in their M4 processors.

This will be cool to see, but completely useless if more RAM is not available.

An LLM works pretty well on my wife's Air with 8GB RAM, but it almost completely locks up the system while processing.

For the cost of upgrading the ram on a , I can just buy a and run AI stuff remotely from any device.

chikim, to llm
@chikim@mastodon.social avatar

Thanks to all the recent large LLMs, "Apple is considering support for up to half a terabyte of RAM" for the highest-end M4 Mac configurations. I'm sure it won't be cheap, but I bet it will be cheaper than getting 500GB of VRAM from Nvidia. lol https://9to5mac.com/2024/04/11/apple-first-m4-mac-release-ai/

CharlieMcHenry, to ai
@CharlieMcHenry@connectop.us avatar

Bill introduced to require companies to list what copyrighted materials they use for training purposes.


craigbrownphd, to technology
@craigbrownphd@mastodon.social avatar
stevensanderson, (edited )
@stevensanderson@mstdn.social avatar

@craigbrownphd I'm thinking of signing up for this. I typically ask a lot of coding questions (Copilot, which I pay for via GitHub), but I also do a lot of writing and idea/image generation.

How would you rank Gemini Advanced, GPT Plus, and Copilot Pro?

How would anyone else out there rank them?

futurebird, (edited ) to random
@futurebird@sauropods.win avatar

People trying to train AIs are now complaining that all of the AI-generated data on the internet is making it hard for them to get quality training sets of natural language and images.

bitter snickering

@bornach@masto.ai avatar

@cehteh @futurebird

It seems like started marking their own generated text a few months ago. One of the telltale red flags is, curiously, whether the article contains an out-of-context fun fact about Mount Everest being 46,449 bananas tall.

See other examples I've found so far:

Fun fact about Mt Everest being 46,449 bananas tall in an Etsy item description for a T-shirt

hendrik, to LLMs

Have people tried to link the output of #llms to devices that take actions in the real world? What if one creates a way for an #llm to make pull requests on #github or #gitlab ? The #ai could spend day and night browsing through #foss to improve them. That would be as much exciting as creepy!
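As a sketch of the plumbing half of that idea: the GitHub REST API's POST /repos/{owner}/{repo}/pulls endpoint is all a bot needs to open a pull request once a branch exists. Everything LLM-related (generating the patch, pushing the branch, writing the title and body) is assumed to happen elsewhere; all repo and branch names below are illustrative:

```python
import json
import urllib.request

API = "https://api.github.com"

def build_pr_request(owner, repo, token, title, head, base, body):
    """Build the POST /repos/{owner}/{repo}/pulls request (GitHub REST API)."""
    payload = json.dumps(
        {"title": title, "head": head, "base": base, "body": body}
    ).encode()
    return urllib.request.Request(
        f"{API}/repos/{owner}/{repo}/pulls",
        data=payload,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )

def open_pull_request(owner, repo, token, title, head, base, body):
    """Send the request; an LLM pipeline would supply title/body/branch."""
    req = build_pr_request(owner, repo, token, title, head, base, body)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Whether anyone should wire an unsupervised model to this is exactly the "exciting as creepy" question.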

tk, to ai
@tk@bbs.kawa-kun.com avatar

LLMs really are just the next evolution of search engines.

bmbufalo, to iOS
@bmbufalo@fosstodon.org avatar

One of the most underrated feature enhancements to and is how now reads out descriptions of images when receiving a text with AirPods in. Absolutely love using an internally developed and super awesome!

weddige, to ai
@weddige@gruene.social avatar

The big problem with the ubiquitous push for is the idea that I would want to talk to them if they were just a bit better.

Sometimes I don't even want to talk to real humans, so why would I ever want a fake human to be my interface to things that I can perfectly read myself?

ErikJonker, to ai
@ErikJonker@mastodon.social avatar

Sometimes I forget there is more than the big tech players in generative AI. A good example is https://cohere.com/ with their newest model, Command R+, which scores well on the LMSYS leaderboard.

chikim, to llm
@chikim@mastodon.social avatar

Apparently Meta is planning to release two small variants of Llama 3 next week "as a precursor to the launch of the biggest version of Llama 3, expected this summer." Command R+, Mixtral 8x22B, Google CodeGemma... All of a sudden companies are releasing LLMs like crazy! Where's Apple? Maybe at WWDC 2024? lol https://www.theinformation.com/articles/meta-platforms-to-launch-small-versions-of-llama-3-next-week
