ct_bergstrom, to random

One of the decisive moments in my understanding of LLMs and their limitations was when, last autumn, @emilymbender walked me through her Thai Library thought experiment.

She's now written it up as a Medium post, and you can read it here. The value comes from really pondering the question she poses, so take the time to think about it. What would YOU do in the situation she outlines?

https://medium.com/@emilymenonbender/thought-experiment-in-the-national-library-of-thailand-f2bf761a8a83

micron,
@micron@mastodon.social avatar

@ct_bergstrom @emilymbender No, I read it.
From your writing I think I know that you also see the importance of the mis-/disinformation aspect.
My concern is that the focus on what it "is or is not" lets the public easily dismiss the whole debate around AI as "academic", or at the other end sensationalise it.

rysiek, (edited ) to random
@rysiek@mstdn.social avatar

When Conquistadors stole Aztec gold — which largely came in the form of cultural artifacts or religious items — they often melted it down into simple, even crude, gold bars for easier transport:
https://www.smithsonianmag.com/smart-news/gold-bar-once-belonged-aztec-emperor-moctezuma-180973959/

After all, they were not interested in the cultural and religious significance of these items. They just wanted the gold, which they could easily sell to others.

By reducing it to the raw material they destroyed the cultural and historical context. We are all poorer for it.

🧵/1

rysiek, (edited )
@rysiek@mstdn.social avatar

When techbros train their LLMs on people's works, these works are similarly removed from their individual context, "melted down" into the raw material — training data. Just disembodied words and phrases and sentences for the model to parrot later.

Historical and cultural and social context of these works is destroyed, melted away.

And it also happens without consent.
And it also happens on a gigantic scale.
And it also happens because techbros can sell the output.

🧵/3

futurebird, to random
@futurebird@sauropods.win avatar

Is there anyone serious who is saying this? Or is this just another way to make the tech seem more powerful than it is?

I don't get this "we're all gonna die" thing at all.

I do get the "we are too disorganized and greedy to integrate new technology well without the economy getting screwed up and people suffering... but that's another matter..."

hobs,
@hobs@mstdn.social avatar

@msh
Not true. All the benchmarks say otherwise. You have to look past the hyped LLMs to the bread-and-butter BERT and BART models, but the trend is undeniable:

https://paperswithcode.com/area/natural-language-processing

You name an NLP problem and there's an LLM that is now better at it than the average human. Not so 2 yrs ago. Times they are a-changin'.
@futurebird

mariyadelano, to ai
@mariyadelano@hachyderm.io avatar

I got early access to Google's new AI-powered search experience, and I wrote about my first impressions here: https://kalynamarketing.com/blog/google-sge-review

Main thoughts:
I'm impressed. Google created a refreshing implementation of generative AI for search.

SGE never made me feel like it was trying to be more than a search engine, or force a clunky chatbot dynamic on me.

I don't know how practical it will be, but I am pleasantly surprised so far

mariyadelano,
@mariyadelano@hachyderm.io avatar

Looking at other people’s initial impressions of Google’s new AI search beta, and I feel like a lot of the discourse is kind of missing the point?

Yes, all the usual questions are still important. But what makes Google’s experiment in particular interesting is that it’s NOT a chatbot.

It’s a different UX for generative AI. And it’s an experience that is grounded in search, versus Bing, which felt like just a search-tinted coat of paint on top of ChatGPT.

jeffjarvis, (edited ) to random
@jeffjarvis@mastodon.social avatar

This, via @emilymbender, is so good: a paper opposing "gratuitous anthropomorphic features." I've been arguing that LLMs should not use first-person human language but third-person* machine language, and not use brain verbs but machine verbs (e.g., instead of "I write," "the program assembled").

  • Corrected. I had said first-person machine but I stand corrected by the expert, linguist Dr. Bender.

https://arxiv.org/pdf/2305.09800.pdf

teixi,
@teixi@mastodon.social avatar

@emilymbender @jeffjarvis

Precious @emilymbender skills in just 4 min.

fav bit:

» if it makes sense,
it is because you made sense of it «

https://youtu.be/NIqgr3AF3VE

HT via @WritingItReal

janriemer, to ai

You don't need AI to transform .then method chains to async/await syntax. 🙄

Most LSPs can do that with 100% correctness, not probabilistically like LLMs.
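
To make "deterministic, not probabilistic" concrete, here is the kind of purely syntactic rewrite such a code action performs. A minimal sketch; the fetchUser/fetchProfile stubs are hypothetical, defined only so the example runs:

```typescript
// Hypothetical stub APIs, defined only to make the example self-contained.
const fetchUser = (id: string) => Promise.resolve({ profileId: `p-${id}` });
const fetchProfile = (user: { profileId: string }) =>
  Promise.resolve({ displayName: `User ${user.profileId}` });

// Before: a .then chain.
function loadNameThen(id: string): Promise<string> {
  return fetchUser(id)
    .then((user) => fetchProfile(user))
    .then((profile) => profile.displayName);
}

// After: the mechanically equivalent async/await form. A syntax-tree
// rewrite (an LSP code action, or an ast-grep rule) produces this the
// same way every time, because it transforms the tree rather than
// predicting tokens.
async function loadNameAwait(id: string): Promise<string> {
  const user = await fetchUser(id);
  const profile = await fetchProfile(user);
  return profile.displayName;
}

loadNameThen("42").then(console.log); // User p-42
loadNameAwait("42").then(console.log); // User p-42
```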

janriemer,

Another crucial aspect in this matter of rewriting your code:
You should make your tools reusable, so that others can benefit from them as well. This is not possible with LLMs. Sure, you can share the prompt, but the output is all wishy-washy.

Use a proper tool for this kind of task, e.g. ast-grep - ⚡ a fast and polyglot tool for code searching, linting, and rewriting at large scale. Written in Rust:

https://ast-grep.github.io/

2ndStar, to gamedev German
@2ndStar@astronomy.social avatar

deleted_by_author

  • HistoPol,
    @HistoPol@mastodon.social avatar

    @2ndStar

    In your case I'm less worried. It's probably also the only way left to still do something. :)

    I fear, however, that as a society we have already failed here. It should never have been allowed to get this far without us, as a society, being prepared for it.

    https://mastodon.social/@HistoPol/109894787077782438

    bwaber, to random
    @bwaber@hci.social avatar

    I had a pretty busy day, but at least I was able to go for a nice walk and listen to some nice talks while I was waiting for my car at the garage! (1/7)

    A wide stream going through a swamp with small trees on either side

    bwaber,
    @bwaber@hci.social avatar

    First was a fantastic panel on using AI for science at the Alan Turing Institute with @abebab, Atoosa Kasirzadeh, and @SandraWachter. The panelists are incisive and withering in their criticism of blindly applying LLMs to fields where the truth matters, as well as of the tendency of industry and academia to center benefits rather than harms. Highly recommend https://www.youtube.com/watch?v=FgoT1Jygf1k (2/7)

    bwaber, to random
    @bwaber@hci.social avatar

    Well today was a much better day, and I was able to get out and enjoy the weather and listen to great talks! (1/8)

    A raised wooden boardwalk through a swamp with dense foliage
    A sunlit brook with greenery on each side. A log spans the brook

    bwaber,
    @bwaber@hci.social avatar

    Next was an excellent talk by @diyiyang on socially responsible NLP at the Cyber Policy Center. This is a great overview of the various problems with LLMs and also how NLP approaches can be adapted to make systems more culturally aware. You should definitely skip the Q&A here though https://www.youtube.com/watch?v=PrVWEdVfvIQ (7/8)

    williamgunn, to random
    @williamgunn@mastodon.social avatar

    If it's true that LLM-generated content is going to kill user-generated content platforms because it's DDOSing their moderation systems, one would expect to see a rise in consumption of higher production value content. Good news for streaming platforms, I guess?

    williamgunn,
    @williamgunn@mastodon.social avatar

    LLM-generated content doesn't have to DDOS a moderation system. Moderators are pretty good at distinguishing generated content from other stuff. My accuracy rate was about 80% for the most problematic stuff, the stuff that's churned out en masse to scam revenue sharing programs. The problem is that it's hard to say exactly how you know something is generated, and the people who run moderation systems think moderation systems have to be perfectly systematic in order to be fair.

    williamgunn,
    @williamgunn@mastodon.social avatar

    It is, in fact, impossible to be perfectly systematic, but for about 70-80% of the population, this is really hard to understand. So the only route for survival for user-generated content platforms, meta-systematic moderation (needs a whole post, but basically doing what needs doing in the gaps between explicit rules), is going to be something that seems unfair and arbitrary to most people. I don't see how they get past that.

    absamma, to ai

    Nvidia is now a trillion-dollar company thanks to the surge in demand for their chips to train LLMs. I still wonder about the unit economics, but hopefully costs will come down significantly with time.

    https://www.bbc.co.uk/news/business-65757812

    schizanon, to ai

    The same people who were building nothing with crypto and NFTs are now building nothing with AI and LLMs, and tomorrow they will still be building nothing.

    RE: https://mastodon.social/users/flargh/statuses/110452339151754728

    toxomat, to ChatGPT

    Once more about ChatGPT: I made a serious attempt to have it write a ~500-word abstract for a talk for me. Or to get its help with one. Seriously tinkered with the prompt, 6 iterations. Result: nothing but bullshit. Reiterations of definitions and trivia from around the topic. The hypothesis "the thing is completely useless for academic purposes because it cannot understand ideas" once again could not be refuted.

    williamgunn, to ai
    @williamgunn@mastodon.social avatar

    "LLMs can't do anything in the real world, so all these doomsday scenarios are just silly, they'll never..." https://sanctuary.ai/resources/news/sanctuary-ai-unveils-phoenix-a-humanoid-general-purpose-robot-designed-for-work/

    williamgunn, to ai
    @williamgunn@mastodon.social avatar

    Most people believe there's such a thing as good taste. It's highly relative, but there are things which are generally agreed to be in bad taste. At Google, Facebook, Reddit, etc. (user-generated content), a large portion of revenue is generated via things that are in bad taste, so people who work there tend to have a very blasé attitude towards taste. The fact that LLMs produce stuff in poor taste is just not something they care about, and this is a key way to differentiate a community.

    nighthawk, to random
    @nighthawk@aus.social avatar

    With WWDC around the corner, I've cleared my deck and written up my thoughts on Microsoft's Build conference. Interesting guidance for adding AI to apps, and the upcoming tools in Azure do look like promising building blocks.

    https://adrian.schoenig.me/blog/2023/06/04/microsoft-build-for-app-developers/

    remixtures, to ai Portuguese
    @remixtures@tldr.nettime.org avatar

    : "“The machines we have now, they’re not conscious,” he says. “When one person teaches another person, that is an interaction between consciousnesses.” Meanwhile, AI models are trained by toggling so-called “weights” or the strength of connections between different variables in the model, in order to get a desired output. “It would be a real mistake to think that when you’re teaching a child, all you are doing is adjusting the weights in a network.”

    Chiang’s main objection, a writerly one, is with the words we choose to describe all this. Anthropomorphic language such as “learn”, “understand”, “know” and personal pronouns such as “I” that AI engineers and journalists project on to chatbots such as ChatGPT create an illusion. This hasty shorthand pushes all of us, he says — even those intimately familiar with how these systems work — towards seeing sparks of sentience in AI tools, where there are none."

    https://www.ft.com/content/c1f6d948-3dde-405f-924c-09cc0dcf8c84
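
    Not from the article, but for concreteness: a minimal sketch of what “toggling the weights” means mechanically, with a single weight and gradient descent (illustrative TypeScript, no real framework):

```typescript
// One weight, adjusted step by step until the outputs match the
// desired targets (here, y = 2x). Production models have billions of
// weights, but each update is this same kind of numeric nudge.
let weight = 0.0; // the "strength of connection" being adjusted
const learningRate = 0.1;
const examples: Array<[number, number]> = [[1, 2], [2, 4], [3, 6]];

for (let step = 0; step < 100; step++) {
  for (const [x, target] of examples) {
    const error = weight * x - target;
    weight -= learningRate * error * x; // nudge toward smaller error
  }
}

console.log(weight.toFixed(3)); // ≈ "2.000"
```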

    illumniscate, to ai

    Alright, there is much fuss over AI in education. While I embrace what LLMs offer, why are so many people going mad over the topic? I mean, I'm not a grandmaster in the field (I've been doing it for 15 yrs "only") & am pro tech, but what's wrong with using less tech?

    Tech should be integrated to a degree & definitely taught about, but beyond that it will mostly get in the way. Effective teaching has less to do with tech & everything to do with pedagogy.

    @edutooter

    williamgunn, to ai
    @williamgunn@mastodon.social avatar

    CEOs of all the AI companies right now.

    fj, to random
    @fj@mastodon.social avatar

    “LLMs not only fail to properly generate correct Python code when default function names are swapped, but some of them even become more confident in their incorrect predictions as the model size increases, an instance of the recently discovered phenomenon of Inverse Scaling, which runs contrary to the commonly observed trend of increasing prediction quality with increasing model size”
    https://arxiv.org/abs/2305.15507
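
    Concretely, the paper's prompts declare two Python builtins swapped (e.g. `len, print = print, len`) and ask the model to complete code that honors the swap; larger models increasingly fall back on the usual meanings. A TypeScript analogue of the same trap, purely illustrative (the benchmark itself is Python):

```typescript
// The prompt swaps two standard functions up front...
const floor = Math.ceil;
const ceil = Math.floor;

// ...so a faithful completion must honor the swap: with the bindings
// above, "round down" has to be spelled `ceil`.
function roundDown(x: number): number {
  return ceil(x); // looks wrong at a glance, correct under the swap
}

console.log(roundDown(2.7)); // 2 — a model relying on the usual
// meanings would emit `floor(x)` and return 3 instead
```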

    hobs,
    @hobs@mstdn.social avatar

    @fj
    As Rob Miles says, it's a sycophant. It's doing exactly what it was trained to do: maximize likes. https://yewtu.be/watch?v=w65p_IIp6JY



    kellogh, to ChatGPT
    @kellogh@hachyderm.io avatar

    $20/month isn't a ton of money, but if you use a free UI with the API, you can get away with paying less than $5/month for ChatGPT with relatively heavy usage, and it's a lot more stable than the free version.
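
    The back-of-the-envelope behind that claim, assuming (for illustration) gpt-3.5-turbo's mid-2023 API price of about $0.002 per 1K tokens; plug in current prices before relying on it:

```typescript
// API pay-as-you-go vs. the flat $20/month subscription.
const pricePerThousandTokens = 0.002; // USD, assumed mid-2023 rate
const monthlyBudget = 5; // USD

const tokensPerMonth = (monthlyBudget / pricePerThousandTokens) * 1_000;
console.log(tokensPerMonth.toLocaleString()); // 2,500,000 tokens/month
console.log(Math.round(tokensPerMonth / 30).toLocaleString()); // ~83,333/day

// Even heavy chat usage rarely approaches ~83K tokens a day, which is
// why the API can come in well under the $20 subscription.
```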

    Jigsaw_You, (edited ) to ai Dutch
    @Jigsaw_You@mastodon.nl avatar

    Spot-on article on the threat of AI-powered counterfeit people.

    “Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us.” - Daniel C. Dennett

    https://www.theatlantic.com/technology/archive/2023/05/problem-counterfeit-people/674075/

    reedmideke, to random
    @reedmideke@mastodon.social avatar

    On that CNET thing in the last boost, my first thought was "this is gonna make search even more useless" and… yeeeep "They are clearly optimized to take advantage of Google’s search algorithms, and to end up at the top of peoples’ results pages"

    https://gizmodo.com/cnet-ai-chatgpt-news-robot-1849996151

    reedmideke,
    @reedmideke@mastodon.social avatar

    @willoremus digs into the compute cost of AI, and boy does that not look like good news for all the startups cramming AI into everything (gift link) https://wapo.st/3WTCK8Q

    Jigsaw_You, (edited ) to ai Dutch
    @Jigsaw_You@mastodon.nl avatar

    Very interesting study on how bias in AI writing assistants leads to biased thinking.

    “It turned out that anyone who received AI assistance was twice as likely to go with the bias built into the assistant, even if their initial opinion had been different.”

    @machinelearning

    https://arstechnica.com/science/2023/05/ai-writing-assistants-can-cause-biased-thinking-in-their-users/
