bwaber, to random
@bwaber@hci.social avatar

Another looong day! But luckily my dog hung out with me outside between calls and I was able to listen to some talks for my ! (1/7)

bwaber,
@bwaber@hci.social avatar

Next was an interesting talk by @Prakharg on enhancing reliability at the @allen_ai. This talk gets into using natural language as an explicit tool for designing dialogue behavior, and while I'm not sure if this is the best interface for LLMs I could see this being used for explanation in a variety of contexts https://www.youtube.com/watch?v=KBGYXxQFgmY (3/7)

chrisoffner3d, to random

No mention of “AI” by Apple.

They talked about “neural networks,” “machine learning,” mentioned “encoder-decoder” architectures and “Transformers” for language prediction but did not join the AI circus. I respect that.

https://www.theverge.com/23749697/apple-wwdc-ai-machine-learning-new-features-chatbots-chatgpt

chrisoffner3d,

Tim Cook: “What people are now talking about are these Large Language Models. I think they have great promise. I do think that it’s so important to be very deliberate and very thoughtful in the development and the deployment of these because they can be so powerful that you worry about things like bias, things like misinformation, maybe worse in some cases.”

https://youtu.be/YmOC7dK3rc0

futurebird, to random
@futurebird@sauropods.win avatar

Is there anyone serious who is saying this? Or is this just another way to make the tech seem more powerful than it is?

I don't get this "we're all gonna die" thing at all.

I do get the "we are too disorganized and greedy to integrate new technology well without the economy getting screwed up and people suffering... but that's another matter..."

mkarliner, (edited)
@mkarliner@mastodon.modern-industry.com avatar

@misc @futurebird

It does occur to me that the smartest people on the planet are rarely the richest or the most powerful. Quite the reverse in fact.

I would imagine the same issues would face a general AI.

I think the best strategy for world domination is not to develop superhuman intelligence, but to be dumb and lie consistently and plausibly. LLMs are well advanced in that.

Of course, the very best strategy for vast riches is to be both dumb and lucky.

interfluidity, to random

a thing i don’t get is what is new. i mean, computers have long been much, much “smarter” than humans in, for example, their ability to perform arithmetic, or to remember things. recent AI tools are interesting for sure, but what superior competence of theirs makes these new systems so threatening, compared to older superior competences?

markhughes,
@markhughes@mastodon.social avatar

@interfluidity Hype and human 'hallucination' are what make this seem a much bigger step than it is.

It's the first time computers have seemed human in how we interact.

People are projecting human qualities into them because they don't understand what LLMs actually are. So people make invalid assumptions, and VCs and corporations are exploiting this, deliberately anthropomorphizing LLMs with terms like "hallucinate" instead of "make false statements".

Of course journalists amplify this. 🤷‍♂️

daringfireball, to random
@daringfireball@mastodon.social avatar

Ted Chiang on AI: ‘The Machines We Have Now Are Not Conscious’
https://daringfireball.net/linked/2023/06/04/chiang-ai

mjgardner, (edited)
@mjgardner@social.sdf.org avatar

@daringfireball @gruber If you must use a metaphor of actual consciousness, I like Emily Bender’s “stochastic parrot.” https://dl.acm.org/doi/10.1145/3442188.3445922


amberage, to random
@amberage@eldritch.cafe avatar

Things LLMs are great for:

  • generating character names for extras
  • generating writing prompts to flex your creative muscles when you're stuck
  • generating story titles following genre conventions to get you out of writer's block
  • filling in blanks with placeholder stuff

things LLMs aren't great for:

  • literally any decision making
  • unsupervised operation of important stuff
  • actually generating writing that will see the light of day
  • research
  • accountability

LLM stands for Large Language Model, the type of autocomplete-style generative algorithm, such as ChatGPT, often incorrectly referred to as "AI" or "artificial intelligence".

amberage,
@amberage@eldritch.cafe avatar

Anyway, back on topic: simple things, like naming characters and places, are fairly minor decisions and can easily be checked against existing IP by a quick Google search (no Theodore Pendlewick exists in fiction, but several companies named Arcane Agenda exist in reality). The side-effects of LLMs, like emulating their training data to the point of copyright infringement, can be easily kept in check at that scale, and retracing how exactly the model arrived at its output is largely irrelevant.

But when you generate entire audience-facing content that way, you can't check for infringement anymore, and when you let it make major decisions, you can't hold it accountable – neither for malpractice, nor for reinforcing biases contained in its training data.

I don't think anyone is saying every and all use of such models is bad (although there is an argument to be made for shutting the lid on Pandora's box because harmful use outweighs harmless use), but you have to be aware of its limits, and the way it works (again, it's basically like your phone's predictive keyboard, just trained on way more data, none of which is yours) before you can safely and responsibly employ it.

amberage,
@amberage@eldritch.cafe avatar

Oh, and by the way: if you want to tag a hashtag but need to append a plural s without altering the tag, as I did with LLMs above: if your client supports inputting HTML entities (as glitch-soc does, in both Markdown and HTML modes), you can use a zero-width-space entity (&#8203;) between the hashtag name and the s.
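For the curious, the entity in question is U+200B ZERO WIDTH SPACE. A minimal Python sketch of why the trick works: the character is invisible when rendered but still present in the string, so a word-character tag matcher stops at it. (Whether any particular server's real hashtag parser behaves exactly like this regex is an assumption for illustration.)

```python
import re

# &#8203; is the HTML entity for U+200B ZERO WIDTH SPACE.
# Placed between a hashtag and a plural "s", it renders as
# "#LLMs" while keeping the tag itself just "#LLM".
tagged = "#LLM\u200bs"  # what typing #LLM&#8203;s produces in the post

print(tagged)       # looks like "#LLMs" in most renderers
print(len(tagged))  # 6 characters: the invisible space is really there

# A simple word-character tag matcher stops at the invisible character,
# so only "#LLM" is treated as the hashtag:
match = re.match(r"#\w+", tagged)
print(match.group())  # "#LLM"
```

The same idea works for any suffix, not just a plural s, as long as the renderer treats U+200B as invisible.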

MattHodges, to ai

Lots of conversation around the political risks of AI. Lots talk about disinfo. Some around data privacy. But I think the biggest political risk right now is these tools convincing us to do visibly stupid things. That professor who accused his class of cheating — that lawyer who filed fake citations — these are smart people! Political staff need to get hands-on familiarity with these tools to understand the benefits and limits, or else we too are primed for something consequentially embarrassing.

MattHodges,

I really would like to avoid the situation where some staffer asks some LLM to dig up an old quote or legislative record or financial history or other oppo-research and they run with a bunch of campaign comms on a thing that never happened.

thomasfuchs, to random
@thomasfuchs@hachyderm.io avatar

The “AI” we have can’t think and doesn’t have concepts of anything and doesn’t know anything.

Therefore it can’t “attack its enemies”; for the AI there is no such thing as enemies, because it literally doesn’t know anything.

All it is is formulas generated from statistical analysis.

This works great for some use cases, and really not at all for others.

Definitely it will not try to kill you or anyone else. It doesn’t know or understand that there is a you.

Pampa,

@thomasfuchs

The threat is far more profound and insidious than you imagine.

Watch Yuval Noah Harari's talk here and ponder.

https://www.youtube.com/watch?v=LWiM-LuRe6w&pp=ygUUeXV2YWwgbm9haCBoYXJhcmkgYWk%3D

tero, to random
@tero@rukii.net avatar

Most Aliens May Be Artificial Intelligence, Not Life as We Know It

"Human intelligence may be just a brief phase before machines take over. That may answer where the aliens are hiding."

https://www.scientificamerican.com/article/most-aliens-may-be-artificial-intelligence-not-life-as-we-know-it/

HistoPol,
@HistoPol@mastodon.social avatar

@voron @tero @xtaldave

(3/n)

...source community's democratization approach to LLMs now outpacing even Open-AI's development leaps. (See first link) This is the evolutionary booster predicted at the end of the last century as "humanity has evolved into a socio-technological complex."

" In other words,...

yakky, to programming
@yakky@mastodon.social avatar

I just got the weirdest CV cover letter ever for a developer role, mixing Adrian Newey (the Adrian Newey of F1 fame), Hitler (that Hitler) and epigenetics (wtf epigenetics has to do with software engineering??). Quickest rejection ever

cazabon,

@yakky

ChatGPT is branching out.

Apparently hiring managers are getting tons of these AI-generated CVs and cover letters for any position that can possibly be done remotely.

lauren, to random
@lauren@mastodon.laurenweinstein.org avatar

We shouldn't be worrying about AI wiping out humanity. That's a smokescreen. That's sci-fi. We need to worry about the individuals now and in the near future who can be hurt by the premature deployment of generative AI systems that spew wrong answers and lies, and then when asked for confirmation, lie about their own lies! And just popping up warnings to users is useless, because you know and I know that hardly anyone will read those warnings or pay any attention to them whatsoever.

cazabon,

@lauren

The warnings are also cold comfort to the loved ones of the guy the vigilante gang beat to death because some generated content somewhere labelled him an unrepentant pedophile in their neighbourhood.

I don't know what the solution is. Mandate that an indicator of its origin be prominently displayed within the generated text? How could that work?

uzayran, to random
@uzayran@cyberplace.social avatar
cpi,
@cpi@mastodon.social avatar

@uzayran we need @web3isgreat but for techbros failing to replace humans with

arstechnica, to random
@arstechnica@mastodon.social avatar

Lawyer cited 6 fake cases made up by ChatGPT; judge calls it “unprecedented”

Judge weighs punishment for lawyer who didn't bother to verify ChatGPT output.

https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social

internic,
@internic@qoto.org avatar

@arstechnica '... he asked the AI tool whether Varghese is a real case. ChatGPT answered that it "is a real case" ... When asked if the other cases provided by ChatGPT are fake, it answered, "No, the other cases I provided are real and can be found in reputable legal databases such as LexisNexis and Westlaw." '

If accurate, to me this illustrates the core of the issue with many people's use of so-called AI tools: this lawyer bought the hype that these are verging on human-like intelligence and could understand and answer his question, while in reality they were only constructing a statistically probable sequence of words that might be written in response to such a query, with no relationship to the facts of the matter.

lauren, to random
@lauren@mastodon.laurenweinstein.org avatar

All these years later, and people are still terrified of HAL.

cazabon,

@lauren

We need a ChatGPT-type LLM with a conversational mode, feeding a text-to-speech engine with Douglas Rain's voice.

Maybe if people heard "AI" bullshit in HAL's voice, they'd be a little more skeptical about its content?

judell, to random
@judell@social.coop avatar

I'm downloading Descript as I write this. My first intro to text as the interface to audio editing (such a profound and wonderful idea!) was courtesy of Laurian Gridinoc years ago; I've been eagerly awaiting more general availability.

https://cogdogblog.com/2023/05/changing-up-decripting-my-podcast-methods-eh-ai-eh/

See also https://blog.jonudell.net/2022/09/25/curating-the-studs-terkel-archive/ about remixing the Studs Terkel archive using Laurian's https://hyper.audio/. I wonder if Descript uses the same or a different foundation?

Roundtrip,
@Roundtrip@federate.social avatar

@judell @cogdog

I enjoyed the podcast and your conversation on Descript for synchronized text+audio editing.

Reading the very clean transcript including a few ‘disfluencies’ — like ‘um’ — made me think of this ‘About Language’ podcast: http://becauselanguage.com/76-ooo-yum-uh/

TLDR ‘Um’s are helpful markers to signal pauses while thinking. Host Daniel Midgley decided to leave them in transcripts to keep a natural tone. Some ’s pick this up as well.

randomgeek, to random

I'm trying. I'm really trying to give Copilot a chance.

So far it's been suggesting reasonable Python code. Not great, but reasonable.

Too bad I was asking it about Raku and Nushell.

annaleen, to random
@annaleen@wandering.shop avatar

"It’s increasingly looking like this may be one of the most hilariously inappropriate applications of AI that we’ve seen yet." I am riveted by the extensive documentation of how ChatGPT-powered Bing is now completely unhinged. @simon has chronicled it beautifully here: https://simonwillison.net/2023/Feb/15/bing/

HistoPol,
@HistoPol@mastodon.social avatar

@simon @heiseonline

(2/2)

...operator."
This is a fiasco the colonel caused.

Sometimes, the original story pans out.

Besides, now that an AI can be set up on a gaming PC for as little as 100 USD, and gamers being what they are, I'm sure we will not have to wait another year for corroboration, alas.

https://www.businessinsider.com/ai-powered-drone-tried-killing-its-operator-in-military-simulation-2023-6

jbzfn, to random
@jbzfn@mastodon.social avatar

"If you google around enough, you’ll find people talking about using LangChain to chain together LLM calls and get better outputs. However, chaining calls to an LLM just makes the latency problem with LLMs worse, which is a nonstarter for us. But even if it wasn’t, we have the potential to get bitten by compound probabilities."
@honeycombio


https://www.honeycomb.io/blog/hard-stuff-nobody-talks-about-llm

erinmikail, to random
@erinmikail@mastodon.social avatar

IMPORTANT ANNOUNCEMENTS FROM MY DAY JOB!

Label Studio 1.8.0 is now LIVE! 🚀

Optimized for fine-tuning datasets for LLMs and Foundational Models.✨

Check it out on the blog:

https://labelstud.io/blog/introducing-label-studio-1-8-0/

gimulnautti, to programming
@gimulnautti@mastodon.green avatar

In my opinion, here is the real power of current-level LLMs: declarative programming via close-to-natural-language prompts.

It’s not just productivity, so many more people can get into programming now! No need to understand loops, branches, conditions! It’s all doable by natural language.

You only need to dabble a bit in states and data storage structures. Just wonderful!

https://medium.com/javascript-scene/sudolang-a-powerful-pseudocode-programming-language-for-llms-d64d42aa719b

schizanon, to ai

Y'all should be asking yourselves why all these rich fucks want us to be worried about AI when there are much more real and present threats to humanity they could be flipping out about.

timswan, to ai
@timswan@mastodon.social avatar

It’s hard to believe the marketing momentum behind AI when auto-correct is so fucking bad, and Instagram pushes bullshit at me if I pause for a second on a post. No, Instagram, my mistaken like on a post doesn’t mean that’s all I want to see for the next week.

kjr, to ai

Sounds interesting
Large Language Models in Molecular Biology
Deciphering the language of biology, from DNA to cells to human health


https://towardsdatascience.com/large-language-models-in-molecular-biology-9eb6b65d8a30
