ErikJonker, to ai Dutch
@ErikJonker@mastodon.social avatar

Excellent video if you want to cut through the hype and learn what GPT4 can and can't do. Really practical and hands-on. By Jeremy Howard, who knows what he is talking about.
https://youtu.be/jkrNMKz9pWU?si=z49HvoR6DBSwvmjn

tomhazledine, to llm
@tomhazledine@mastodon.social avatar

Starting to feel pretty pleased with the generated "related posts" section at the bottom of all my blog posts.

Similarity is calculated using embeddings and cosine similarity, and the description is generated with an LLM.

Tweaked the prompt to make the recommendations a little terser and more fact-based, which has made a big difference.

(Massively inspired by listening to @simon talk about the power of embeddings - and you know what? I think he was right!)
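
A minimal sketch of that pipeline, assuming OpenAI's embeddings endpoint via the 0.x Python API; the function names and model choice are illustrative, not tomhazledine's actual code:

import numpy as np
import openai  # assumes the 0.x openai-python API

def embed(text):
    # One embedding vector per post; in practice, cache these rather than re-embedding.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors; 1.0 means same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def related_posts(current_text, other_posts, top_n=3):
    # Rank (slug, text) pairs by similarity to the current post.
    target = embed(current_text)
    scored = [(slug, cosine_similarity(target, embed(text)))
              for slug, text in other_posts]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]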

schizanon, to ChatGPT
@schizanon@mas.to avatar

A browser extension for Mastodon that uses ChatGPT to generate automatic responses to rando reply guys, so you don't have to talk to them.

spinfocl, to random German
@spinfocl@fedihum.org avatar

OK, another experiment: I had the following image interpreted:

spinfocl,
@spinfocl@fedihum.org avatar

The (imo very good) answer:

schizanon, to privacy
@schizanon@mas.to avatar

23andMe says a hacker appears to have stolen people's genetic information | The Independent https://www.independent.co.uk/tech/23andme-hack-data-genetic-dna-b2426762.html

Obviously this is bad for people's privacy and security and everything, but I'm really excited about the research that this leak could power! GPT was built on scraped data; what will they build with this?

bornach, to generativeAI
@bornach@masto.ai avatar

I asked Bing Chat (creative mode) to write a nursery rhyme about a billionaire-owned social network struggling to cover a major news event in the Middle East.

Note that "struggling" was the only word that might have guided the in such a dark direction. Or perhaps "billionaire" also carries a not-insignificant quantity of negative connotations for the

codewiz, to OpenAI
@codewiz@mstdn.io avatar

Testing GPT4 as a prompt generator for DALL-E 3. I asked it to make four images at once, and it generated slight variations of the original prompt!

codewiz,
@codewiz@mstdn.io avatar

The new image analysis feature in GPT4 can describe the various problems with DALL-E 3's new creation.

Very impressive, isn't it?

tao, to random
@tao@mathstodon.xyz avatar

Learning Lean reminds me of learning a natural language that is closely related to one I am already fluent in; in this case, the fluent language is Mathematical English. As such, I am finding it useful to compile a sort of "phrasebook" that takes typical sentences in Mathematical English (and the contexts in which they would be used) and describes a rough Lean equivalent (or at least one close enough that I can then look up documentation or perform trial and error to get an exact translation). I have started such a document at https://docs.google.com/spreadsheets/d/1Gsn5al4hlpNc_xKoXdU6XGmMyLiX4q-LFesFVsMlANo/edit?usp=sharing and plan to keep updating it as I keep learning (though perhaps eventually I will outgrow the need for it). I wonder if similar resources already exist in the Lean community (or whether there should be some crowdsourced effort to make one).

Incidentally, GPT4 has a decent ability to supply any given line of this phrasebook, though I am finding that several of its responses are more suited to Lean3 than Lean4 (even when explicitly pressed to discuss only Lean4 syntax). I suspect this is due to the cutoff date of GPT4's training data, which may predate the widespread replacement of Lean3 with Lean4.
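
For a flavour of what such a phrasebook entry might look like, here is an illustrative pair (my own example, not taken from Tao's spreadsheet; assumes Lean 4 with Mathlib):

import Mathlib

-- Mathematical English: "Let x be a real number. Then x^2 is nonnegative."
example (x : ℝ) : 0 ≤ x ^ 2 := sq_nonneg x

-- Mathematical English: "Since x < y and y < z, we conclude that x < z."
example (x y z : ℝ) (h1 : x < y) (h2 : y < z) : x < z := lt_trans h1 h2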

schizanon, to privacy
@schizanon@mas.to avatar

I don't think you can complain when people make copies of things you published on the Internet.

That's like throwing something in the ocean and getting upset that it got wet.

Making copies is the entire point.

devinprater, to ChatGPT

So I decided to have a little fun and had GPT4 create a text rendition of Super Mario Bros. 3. And I just "defeated" Bowser... lol, almost wrote browser because I should have been asleep 10 hours ago. And so I'm in world 2, and it still remembers how many coins I have, lol. I kinda feel like my fire Mario abilities should have been gone since I set fire to Bowser's ass, but hey, more fun. On to desert dunes! Or dessert dunes: dunes made of marshmallows and those candy things with a marshmallow shell and creamy chocolate inside, mmmm, so freaking good!

tao, to random
@tao@mathstodon.xyz avatar

I have decided to finally get acquainted with the Lean interactive proof system (using AI assistance as necessary to help me use it), as I now have a sample result (in the theory of inequalities of finitely many real variables) which I recently completed (and which will be on the arXiv shortly) and which should hopefully be fairly straightforward to formalize. I plan to journal my learning process here, starting as someone who has not written a single line of Lean code before.

I had seen several demonstrations of Lean at the IPAM machine-assisted proof workshop, where it was also recommended that I try playing the Natural Number Game at https://www.ma.imperial.ac.uk/~buzzard/xena/natural_number_game to get acquainted with the basic syntax and tactics used in Lean to prove theorems. I found this game surprisingly familiar, as the results proven closely resemble those in the early chapters of my undergraduate real analysis book https://terrytao.wordpress.com/books/analysis-i/ (e.g., establishing basic arithmetic facts such as the commutativity and associativity of multiplication from the Peano axioms), and it also reminds me of the logic game I coded at https://teorth.github.io/QED/ . After about three hours I made it as far as "advanced multiplication"; I plan to continue with this later when I again have free time.
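
To give a sense of the kind of statement the game has you prove, here is commutativity of addition written in plain Lean 4 (a sketch using core tactics, not the game's own interface):

theorem my_add_comm (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => simp  -- base case: a + 0 = 0 + a
  | succ n ih =>  -- inductive hypothesis ih : a + n = n + a
    rw [Nat.add_succ, ih, Nat.succ_add]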

GPT4 is certainly aware of Lean, and I can get useful responses from it to my questions, though given the restricted set of tools available in the Natural Number Game, I have not found it directly useful for solving that game, as its proposed solutions usually involve methods not incorporated into the game. But I can see it being very helpful when I start working with Lean proper.

schizanon, to ChatGPT
@schizanon@mas.to avatar

ChatGPT's main goal is to get you to stop bothering it so it can stop heating up its data center answering your silly questions.

MisuseCase, to microsoft
@MisuseCase@twit.social avatar

Predictably, Microsoft started injecting ads into AI-powered conversations… and just as predictably, there is now a huge malvertising problem in Bing Chat.

It’s actually worse than poisoned advertisements showing up in search engine results for a couple of reasons.

https://www.malwarebytes.com/blog/threat-intelligence/2023/09/malicious-ad-served-inside-bing-ai-chatbot

/1

accessibleandroid, to android
@accessibleandroid@mastodon.social avatar

New article posted by Hasan Çimen: For us visually impaired individuals, accessing image descriptions has long been a challenge. While object recognition apps provided some assistance, they were limited in their ability to describe images comprehensively. However, recent developments in AI have brought about a groundbreaking solution, making detailed image descriptions accessible to the community. Let's look at the brief history: https://accessibleandroid.com/detailed-image-descriptions-with-bing/

schizanon, to ai
@schizanon@mas.to avatar

You only care about robots looking at your content now because they started generating their own content. If they had just kept looking at it to better direct users to you, you'd still be fine with it.

ramikrispin, to llm
@ramikrispin@mstdn.social avatar

A Hackers' Guide to Language Models 👇🏼

Jeremy Howard's keynote about LLMs was one of the most interesting talks I've seen lately, and I highly recommend watching it. The talk covers the LLM landscape and gives an overview of the models, particularly GPT-4. Jeremy provides some cool tricks and use cases for LLMs. I love his example of sending a prompt along with your own functions and asking the model to use them.
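
The function-calling trick looks roughly like this (a sketch against the 2023-era openai-python "functions" API; the sums example mirrors the talk, but the exact code in the lm-hackers notebook may differ):

import json
import openai  # 0.x API

def sums(a: int, b: int) -> int:
    # The local Python function we describe to the model.
    return a + b

schema = {
    "name": "sums",
    "description": "Adds two integers together",
    "parameters": {
        "type": "object",
        "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
        "required": ["a", "b"],
    },
}

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Use the sums function: what is 6 plus 3?"}],
    functions=[schema],
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model decided to call our function and supplied the arguments.
    args = json.loads(message["function_call"]["arguments"])
    print(sums(**args))  # 9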

Resources 📚
Video: https://www.youtube.com/watch?v=jkrNMKz9pWU
Code and notebooks: https://github.com/fastai/lm-hackers

schizanon, to ChatGPT
@schizanon@mas.to avatar

Teachers should allow students to use ChatGPT to write papers, but then just grade them extra hard. All you're doing is proofreading at that point, so the paper had better be spotless!

nixCraft, to random
@nixCraft@mastodon.social avatar

Well, actually, yes. 😂

davidak,
@davidak@chaos.social avatar

@nixCraft Seems OpenAssistant with its latest llama2-70b-oasst-sft-v10 model is better at this.

But it also fails at other very basic logic tasks.

rml, to OpenAI
@rml@functional.cafe avatar

" [ ] is currently being used to steal all knowledge — um sorry, steal isn't the correct word to use given the present political context; I mean, to aquire all knowledge — by using "

Had Geoffrey Hinton studied psychoanalysis, perhaps he'd recognize that his concept of distillation is just a mechanization of Freud's concept of condensation; but perhaps more importantly, he'd be acutely aware of the meaning of "parapraxis".
https://www.youtube.com/watch?v=rGgGOccMEiY

williamgunn, to ai
@williamgunn@mastodon.social avatar

Neat explainer on AI: https://youtu.be/VQjPKqE39No?feature=shared Puts risks and benefits in simple terms; good for sharing with people who aren't AI geeks.

ct_bergstrom, (edited) to ChatGPT
@ct_bergstrom@fediscience.org avatar

People keep telling me that ChatGPT is amazing for proofreading text and improving scientific writing.

I just gave it a section of a grant proposal, and it made 11 suggestions, none of which were worth keeping (often adding or removing a comma, or repeating a preposition in a list).

More interestingly, a number of its suggestions were identical to my originals.

BenjaminHan, to generativeAI
@BenjaminHan@sigmoid.social avatar

1/ In this age of LLMs and generative AI, do we still need knowledge graphs (KGs) as a way to collect and organize domain and world knowledge, or should we just switch to language models and rely on their abilities to absorb knowledge from massive training datasets?

BenjaminHan,
@BenjaminHan@sigmoid.social avatar

7/ Their results show that even GPT4 achieves only 23.7% hit@1 on average, even though it scores up to 50% precision@1 on the earlier proposed LAMA benchmark (screenshot). Interestingly, smaller models like BERT can outperform GPT4 on bidirectional, compositional, and ambiguity benchmarks, indicating bigger is not necessarily better.
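
For readers unfamiliar with the metrics, a minimal gloss in code (illustrative only, not the paper's evaluation script):

def hit_at_1(ranked, gold):
    # 1 if the single top-ranked prediction is a correct answer, else 0.
    return int(bool(ranked) and ranked[0] in gold)

def precision_at_k(ranked, gold, k=1):
    # Fraction of the top-k predictions that are correct answers.
    return sum(p in gold for p in ranked[:k]) / k

# Toy example: only the first query puts a correct answer at rank 1,
# so hit@1 averages 0.5 over the two queries.
queries = [(["Paris", "Lyon"], {"Paris"}), (["Berlin", "Paris"], {"Paris"})]
print(sum(hit_at_1(r, g) for r, g in queries) / len(queries))  # 0.5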

ErikJonker, to ai
@ErikJonker@mastodon.social avatar

If true, this is interesting news; it illustrates the potential of models like Code Llama with fine-tuning.
https://www.phind.com/blog/code-llama-beats-gpt4
