tero, to random
@tero@rukii.net avatar

Most Aliens May Be Artificial Intelligence, Not Life as We Know It

"Human intelligence may be just a brief phase before machines take over. That may answer where the aliens are hiding."

https://www.scientificamerican.com/article/most-aliens-may-be-artificial-intelligence-not-life-as-we-know-it/

HistoPol,
@HistoPol@mastodon.social avatar

@voron @tero @xtaldave

(3/n)

...source community's democratization approach to AI is now outpacing even OpenAI's development leaps. (See first link.) This is the evolutionary booster predicted at the end of the last century: "humanity has evolved into a socio-technological complex."

"In other words, ...

arstechnica, to random
@arstechnica@mastodon.social avatar

Lawyer cited 6 fake cases made up by ChatGPT; judge calls it “unprecedented”

Judge weighs punishment for lawyer who didn't bother to verify ChatGPT output.

https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social

internic,
@internic@qoto.org avatar

@arstechnica '... he asked the AI tool whether Varghese is a real case. ChatGPT answered that it "is a real case" ... When asked if the other cases provided by ChatGPT are fake, it answered, "No, the other cases I provided are real and can be found in reputable legal databases such as LexisNexis and Westlaw." '

If accurate, to me this illustrates the core of the issue with many people's use of these so-called "AI" tools: this lawyer bought the hype that they are verging on human-like intelligence and could understand and answer his question, while in reality they were only constructing a statistically probable sequence of words that might be written in response to such a query, with no relationship to the facts of the matter.

thomasfuchs, to random
@thomasfuchs@hachyderm.io avatar

The “AI” we have can’t think and doesn’t have concepts of anything and doesn’t know anything.

Therefore it can’t “attack its enemies”; for the AI there is no such thing as enemies, it literally doesn’t know anything.

All it is is generated formulas based on statistical analysis.

This works great for some use cases, and really not at all for others.

Definitely it will not try to kill you or anyone else. It doesn’t know or understand that there is a you.

Pampa,

@thomasfuchs

The threat is far more profound and insidious than you imagine.

Watch Yuval Noah Harari's talk here and ponder.

https://www.youtube.com/watch?v=LWiM-LuRe6w&pp=ygUUeXV2YWwgbm9haCBoYXJhcmkgYWk%3D

judell, to random
@judell@social.coop avatar

I'm downloading Descript as I write this. My first intro to text as the interface to audio editing (such a profound and wonderful idea!) was courtesy of Laurian Gridinoc years ago; I've been eagerly awaiting more general availability ever since.

https://cogdogblog.com/2023/05/changing-up-decripting-my-podcast-methods-eh-ai-eh/

See also https://blog.jonudell.net/2022/09/25/curating-the-studs-terkel-archive/ about remixing the Studs Terkel archive using Laurian's https://hyper.audio/. I wonder if Descript uses the same or a different foundation?

Roundtrip,
@Roundtrip@federate.social avatar

@judell @cogdog

I enjoyed the podcast and your conversation on Descript for synchronized text+audio editing.

Reading the very clean transcript, including a few ‘disfluencies’ like ‘um’, made me think of this ‘Because Language’ podcast episode: http://becauselanguage.com/76-ooo-yum-uh/

TL;DR: ‘Um’s are helpful markers that signal pauses while thinking. Host Daniel Midgley decided to leave them in transcripts to keep a natural tone. Some AI transcription tools pick this up as well.

uzayran, to random
@uzayran@cyberplace.social avatar
cpi,
@cpi@mastodon.social avatar

@uzayran we need @web3isgreat but for techbros failing to replace humans with AI

randomgeek, to random

I'm trying. I'm really trying to give Copilot a chance.

So far it's been making reasonable Python code suggestions. Not great, but reasonable.

Too bad I was asking it about Raku and Nushell.

lauren, to random
@lauren@mastodon.laurenweinstein.org avatar

We shouldn't be worrying about AI wiping out humanity. That's a smokescreen. That's sci-fi. We need to worry about the individuals now and in the near future who can be hurt by the premature deployment of generative AI systems that spew wrong answers and lies, and then when asked for confirmation, lie about their own lies! And just popping up warnings to users is useless, because you know and I know that hardly anyone will read those warnings or pay any attention to them whatsoever.

cazabon,

@lauren

The warnings are also cold comfort to the loved ones of the guy the vigilante gang beat to death because some AI-generated content somewhere labelled him an unrepentant pedophile in their neighbourhood.

I don't know what the solution is. Mandate that an indicator of its origin be prominently displayed within the generated text? How could that work?

lauren, to random
@lauren@mastodon.laurenweinstein.org avatar

All these years later, and people are still terrified of HAL.

cazabon,

@lauren

We need a ChatGPT-type LLM with a conversational mode, feeding a text-to-speech engine with Douglas Rain's voice.

Maybe if people heard "AI" bullshit in HAL's voice, they'd be a little more skeptical about its content?

yakky, to programming
@yakky@mastodon.social avatar

I just got the weirdest CV cover letter ever for a developer role, mixing Adrian Newey (the Adrian Newey of F1 fame), Hitler (that Hitler) and epigenetics (wtf epigenetics has to do with software engineering??). Quickest rejection ever

cazabon,

@yakky

ChatGPT is branching out.

Apparently hiring managers are getting tons of these AI-generated CVs and cover letters for any position that can possibly be done remotely.

schmod, to random

Sam Altman and OpenAI's stance on AI ethics is so strange. It's like watching the CEO of Exxon give a passionate speech about the dangers of climate change from the deck of a brand-new oil platform.

mkarliner, (edited )
@mkarliner@mastodon.modern-industry.com avatar

@schmod
@futurism
I like the fantasy that this can be controlled anyway.
If we can't agree on management of dangers that have been with us for decades, how are you going to police AI?

Pandora's box is firmly open and can't be shut.

I'm going off to my extinct volcano, with en suite submarine pen, to dream up a dastardly AI to threaten James Bond with. Just let them stop me!

Now, where are my minions?


nhamiel, to ChatGPT

When it comes to issues of confidentiality and privacy, ChatGPT takes up all of the oxygen, but anyone with a credit card has access to a hosted LLM API. Accessibility means a lot of sensitive data is flowing out of organizations without their knowledge. https://modernciso.com/2023/06/01/more-than-chatgpt-privacy-and-confidentiality-in-the-age-of-llms/

Jigsaw_You, (edited ) to ai Dutch
@Jigsaw_You@mastodon.nl avatar

Spot-on article on the threat of AI-powered counterfeit people.

“Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us.” - Daniel C. Dennett

https://www.theatlantic.com/technology/archive/2023/05/problem-counterfeit-people/674075/

jbzfn, to ai
@jbzfn@mastodon.social avatar

"Do I gamble that the flood of language model texts will put a premium on thoughtful writing? That I’m not just improving the models by putting more writing out in the world?

Do I figure out ways of putting more of my writing behind some sort of pay- or login wall, even though that would be counterproductive for my career? Wouldn’t that also just disconnect me from my friends and the online community in general?"
@baldur


https://www.baldurbjarnason.com/2023/tech-broke-the-webs-social-contract/

jbzfn, to random
@jbzfn@mastodon.social avatar

"If you google around enough, you’ll find people talking about using LangChain to chain together LLM calls and get better outputs. However, chaining calls to an LLM just makes the latency problem with LLMs worse, which is a nonstarter for us. But even if it wasn’t, we have the potential to get bitten by compound probabilities."
@honeycombio


https://www.honeycomb.io/blog/hard-stuff-nobody-talks-about-llm
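The "compound probabilities" concern above can be sketched numerically: if each chained LLM call is independently correct with some probability, a chain is only as reliable as the product of those probabilities. The 95% per-call figure below is a hypothetical illustration, not a number from the Honeycomb post:

```python
# Hypothetical illustration: per-call reliability compounds across a chain.
def chain_success_probability(p_per_call: float, n_calls: int) -> float:
    """Probability that every call in a chain of n independent LLM calls is correct."""
    return p_per_call ** n_calls

for n in (1, 3, 5, 10):
    overall = chain_success_probability(0.95, n)
    print(f"{n} chained calls at 95% each -> {overall:.1%} overall")
```

Even a seemingly reliable 95%-accurate call drops below 60% overall reliability once ten of them are chained, on top of the added latency per call.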

timswan, to ai
@timswan@mastodon.social avatar

It’s hard to believe the marketing momentum behind AI when auto-correct is so fucking bad, and Instagram pushes bullshit at me if I pause for a second on a post. No, Instagram, my mistaken like on a post doesn’t mean that’s all I want to see for the next week.

judell, to random
@judell@social.coop avatar

"Externalizing your thinking in that way is intrinsically valuable. But when the rubber duck talks back, it’s a whole new game."

https://www.infoworld.com/article/3697653/when-the-rubber-duck-talks-back.html

jh, to ai German
@jh@swiss.social avatar

@derpostillon publishes the "first truly human AI": https://www.der-postillon.com/2023/05/deppgpt.html 😂

astro_jcm, to ChatGPT
@astro_jcm@mastodon.online avatar

Whenever you think that something spat out by ChatGPT or another LLM makes sense, remember that you are the one making sense out of it and not the LLM itself.

tero, to random
@tero@rukii.net avatar

Explaining tasks to chatbots is not unlike making complicated restaurant orders. Ordering custom food for three people in restaurants is something I am very used to.

There are always mistakes. If you want something without coriander, you're occasionally going to get everything without coriander.

You'll get wrong numbers of utensils for the starters unless you go for the most common options of "share all starters with everyone".

In restaurants, like in code, people learn to "keep it simple, stupid": the KISS principle. With LLMs, people generally don't perceive the complexity of what they are asking, because it is clear to them, and they aren't reading what they wrote from the point of view of the one who has to follow the instructions.

Just now I got Tom Yum with prawn in front of me even though I ordered one with chicken.

Waiters, like chatbots, become confused with complex orders. The way to handle this is to keep orders simple.

Korny, to ai
@Korny@hachyderm.io avatar

Interesting - OWASP are working on a top 10 list of security risks in LLM applications: https://owasp.org/www-project-top-10-for-large-language-model-applications/

davidaugust, to ai
@davidaugust@mastodon.online avatar

Artificial Intelligence has no understanding, yet.
That doesn’t mean it isn’t impressive and useful.

Computers Don’t Understand - John Searle Chinese Room Argument (1984)
https://youtu.be/6tzjcnPsZ_w


trammel, to ai

"We've trained a model to achieve a new state-of-the-art in mathematical problem solving by rewarding each correct step of reasoning (“process supervision”) instead of simply rewarding the correct final answer (“outcome supervision”)."

https://openai.com/research/improving-mathematical-reasoning-with-process-supervision

schizanon, to ai

Y'all should be asking yourselves why all these rich fucks want us to be worried about AI when there are much more real and present threats to humanity they could be flipping out about.
