artificial_intel


Renegade, in Outcry from big AI firms over California AI “kill switch” bill

California has pushed out badly worded laws in the past. Here’s a definition from the bill.

“Artificial intelligence model” means an engineered or machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs that can influence physical or virtual environments and that may operate with varying levels of autonomy.

Tell me that wouldn’t also apply to a microwave oven.

AnUnusualRelic, (edited)

My microwave oven has a kill switch though.

And I’m not sure if it has any degree of autonomy.

photonic_sorcerer,

Holy shit

SuckMyWang, in Microsoft’s Recall feature will now be opt-in and double encrypted after privacy outcry

Triple encrypted or nothing.

pdxfed,

You can’t triple stamp a double stamp! You can’t triple stamp a double stamp!

slazer2au,

Why not? It worked for DES until AES came about
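
(That’s Triple DES: the same DES cipher layered three times in an encrypt-decrypt-encrypt pattern. A toy sketch of the composition below, with XOR standing in for DES purely for illustration - it is not a real cipher, just the shape of the construction:)

```python
# Toy sketch of the 3DES "EDE" composition. XOR stands in for DES purely
# for illustration; XOR keys collapse into one combined key, which is
# exactly why a real block cipher is needed in practice.

def enc(key: int, block: int) -> int:
    return block ^ key  # toy "encrypt": XOR with the key

def dec(key: int, block: int) -> int:
    return block ^ key  # XOR is its own inverse

def triple_ede(k1: int, k2: int, k3: int, block: int) -> int:
    # Encrypt-Decrypt-Encrypt: C = E_k3(D_k2(E_k1(P))).
    # With k1 == k2 == k3 it reduces to single encryption, which is how
    # 3DES stayed backward compatible with plain DES.
    return enc(k3, dec(k2, enc(k1, block)))

block = 0b10110001
assert triple_ede(0x1F, 0x1F, 0x1F, block) == enc(0x1F, block)
print(bin(triple_ede(0x1F, 0x2A, 0x3C, block)))
```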

GluWu, (edited)

Triple encrypted, no erasies, touch blue makes it true. LALALALA

evidences, in Microsoft’s Recall feature will now be opt-in and double encrypted after privacy outcry

Oooohhhh double encrypted, just one more easy exploit and we’ll get double secret encryption!

misterundercoat,

You’re out! Finished, expelled! I want you off this SSD at 9 o’clock Monday morning!

Rolando,

You’re right. We gotta do something.

Absolutely.

You know what we gotta do?

Toga party.

atzanteol,

It’s double rot13…

evidences,

ROT26 is how I store all my passwords on my PC
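
(ROT13 applied twice - i.e. ROT26 - is the identity, so this scheme stores everything in plaintext. A quick check with Python’s built-in rot13 codec:)

```python
# Applying ROT13 twice ("ROT26") returns the original text.
import codecs

secret = "hunter2"
once = codecs.encode(secret, "rot13")  # 'uhagre2' (digits pass through)
twice = codecs.encode(once, "rot13")   # back to 'hunter2'
assert twice == secret
print(secret, "->", once, "->", twice)
```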

bokherif,

We base64’d your data for enhanced encryption!!!1!1!
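
(Friendly reminder that base64 is an encoding, not encryption: there is no key, and anyone can reverse it with the standard library:)

```python
# base64 round-trips with no key; it hides nothing.
import base64

data = b"super secret password"
encoded = base64.b64encode(data)          # b'c3VwZXIgc2VjcmV0IHBhc3N3b3Jk'
assert base64.b64decode(encoded) == data  # reversible by anyone
print(encoded.decode())
```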

originalfrozenbanana, in Microsoft’s Recall feature will now be opt-in and double encrypted after privacy outcry

Neat, it’s still spying

adarza,

now with biometrics added for extra flavor.

homesweethomeMrL, in Leaked Emails Show Elon Musk Diverting AI Resources Away From Tesla as Automaker Flails

I think he’s going to drive it into the ground by hook or by crook. I think he’s deliberately sabotaging it for… reasons?

Either that or he’s really, really bad at running companies.

Phegan,

I think he is actually bad at running companies. Everyone thinks he is this genius playing 5D chess with all of his companies.

He’s just a trust fund baby who failed up and is now in over his head.

lemmus, in Leaked Emails Show Elon Musk Diverting AI Resources Away From Tesla as Automaker Flails

His intention is to redirect AI chips to bribe Tesla investors into voting for his ludicrous pay package. This shouldn’t be read as anything else.

normanwall,

I think his intention is to spite Tesla investors because they have been voting against it, and make more money for himself with his new project.

DScratch,

Isn’t he legally obligated to act in the shareholders’ best interest?

AlternatePersonMan,

Sounds like a whole lot of fraud to me. I don’t understand how he is able to keep diverting resources from Tesla to his other companies unless they’re tied together under some corporate entity. At a minimum he’s stealing from the shareholders. Why have the shareholders not voted him out yet?

It takes a lot of something to drive a company into the ground and demand $56 billion for doing it.

archomrade, (edited) in Sam Altman Admits That OpenAI Doesn't Actually Understand How Its AI Works

Look, I get that we all are very skeptical and cynical about the usefulness and ethics of AI, but can we stop with the reactive headlines?

Saying we know how AI works because it’s ‘just predicting the next word’ is like saying I know how nuclear energy works because it’s ‘just a hot stick of metal in a boiler’

Researchers who work on transformer models understand how the algorithm works, but they don’t yet know how their simple programs can generalize as much as they do. That’s not marketing hype, that’s just an acknowledgement of how relatively uncomplicated their structure is compared to the complexity of its output.
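
To make “relatively uncomplicated” concrete, here is a toy sketch of the next-token loop at the heart of these models. The “model” below is a hand-made bigram lookup table standing in for a real network; in an actual LLM the logits come from one forward pass of the transformer:

```python
# Toy sketch of next-token prediction over a six-word vocabulary.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "."]

def next_token_logits(context: list[str]) -> np.ndarray:
    # A real LLM computes this with one forward pass of the network.
    table = {
        "the": [0.0, 3.0, 0.0, 0.0, 2.0, 0.0],
        "cat": [0.0, 0.0, 3.0, 0.0, 0.0, 0.0],
        "sat": [0.0, 0.0, 0.0, 3.0, 0.0, 0.0],
        "on":  [3.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        "mat": [0.0, 0.0, 0.0, 0.0, 0.0, 3.0],
        ".":   [3.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    }
    return np.array(table[context[-1]])  # toy: only the last token matters

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

context = ["the"]
for _ in range(5):
    probs = softmax(next_token_logits(context))
    context.append(vocab[int(np.argmax(probs))])  # greedy decoding

print(" ".join(context))  # -> "the cat sat on the cat"
```

The loop itself is trivial; everything interesting hides inside the logits function, which in a real model is billions of learned weights.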

I hate that we can’t just be mildly curious about AI, rather than either extremely excited or extremely cynical.

sexy_peach,

Researchers who work on transformer models understand how the algorithm works, but they don’t yet know how their simple programs can generalize as much as they do.

They do!

You can even train small networks by hand with pen and paper. You can also manually design small models without training them at all.
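
For example, here is a tiny network whose weights were chosen entirely by hand, no training at all. It computes XOR, the classic pen-and-paper exercise (these particular weights are just one of many sets that work):

```python
# A hand-designed two-layer network that computes XOR. No training:
# every weight below was picked on paper.
import numpy as np

def step(x):
    return (x > 0).astype(float)  # threshold activation

W1 = np.array([[1.0, 1.0],     # hidden unit 1: fires if x1 OR x2
               [-1.0, -1.0]])  # hidden unit 2: fires unless x1 AND x2
b1 = np.array([-0.5, 1.5])
W2 = np.array([1.0, 1.0])      # output fires only if both hidden units fire
b2 = -1.5

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    h = step(W1 @ np.array(x, dtype=float) + b1)
    y = step(W2 @ h + b2)
    print(x, "->", int(y))  # 0, 1, 1, 0
```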

The interesting part is that this dated tech is producing such good results now that we throw our modern hardware at it.

archomrade,

an acknowledgement of how relatively uncomplicated their structure is compared to the complexity of its output.

The interesting part is that this dated tech is producing such good results now that we throw our modern hardware at it.

That’s exactly what I mean.

sexy_peach,

Ah I see.

archomrade,

Maybe a less challenging way of looking at it would be:

We are surprised at how much of subjective human intuition can be replicated using simple predictive algorithms

instead of

We don’t know how this model learned to code

Either way, the technique is yielding much better results than what could have been reasonably expected at the outset.

ProfessorOwl_PhD,

If you don’t understand how your algorithm is reaching its outputs, you obviously don’t understand the algorithm. Knowing what you’ve made is different to understanding what it does.

archomrade,

Knowing what you’ve made is different to understanding what it does.

Agree, but also - understanding what it does is different to understanding how it does it.

It is not a misrepresentation to say ‘we have no way of observing how this particular arrangement of ML nodes responds to a specific input differently from another arrangement’ - the best we can do is probe the network like we do with neuron clusters and see what each part does under different stimuli. That uncertainty is meaningful, because without a way to understand how small changes to the structure result in apparently very large differences in output, we’re basically just groping around in the dark. We can observe differences in the outputs of two different models, but we can’t meaningfully see the node activity in any way that makes sense or is helpful. The things we don’t know about LLMs are some of the same things we don’t know about neurobiology, and just as significant to remedying dysfunctions and limits in both.
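
For a rough idea of what that probing looks like in code, here is a minimal sketch using PyTorch forward hooks on a toy stand-in model (actual interpretability work, like Anthropic’s, is far more sophisticated than this):

```python
# Record one layer's activations under different stimuli via a forward hook.
import torch
import torch.nn as nn

# Stand-in "model": a tiny MLP, not a real LLM.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

recorded = {}

def hook(module, inputs, output):
    recorded["hidden"] = output.detach()  # snapshot the layer's output

model[1].register_forward_hook(hook)  # watch the hidden ReLU layer

for name in ("stimulus A", "stimulus B"):
    model(torch.randn(1, 8))
    lit = (recorded["hidden"] > 0).sum().item()
    print(f"{name}: {lit}/16 hidden units lit up")
```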

The fear is that even if we believe what we’ve made thus far is an inert but elaborate Rube Goldberg machine (that’s prone to abuse and outright fabrication) that looks like ‘intelligence’, we still don’t know if:

  • what we think intelligence looks like is what it would look like in an artificial recreation
  • changes we make to its makeup might accidentally stumble into something more significant than we intend

It’s frustrating that this field is getting so much more attention and resources than I think it warrants, and the reason it’s getting so much attention in a capitalist system is honestly enraging. But it doesn’t make the field any less intriguing, and I wish all discussions of it didn’t immediately get dismissed as overhyped techbro garbage.

ProfessorOwl_PhD,

OK, I suppose I see what you’re saying, but I think headlines like this are important to shaping people’s understanding of AI, rather than being dismissive - highlighting that, like with neuroscience, we are still thoroughly in the research phase rather than having end products to send to market.

archomrade,

Yea, I’m with ya. Some people interpreted this as marketing hype, and while I agree with them that mysticism around AI is driven by this kind of reporting I think there’s very much legitimacy to the uncertainty of the field at present.

If everyone understood it as experimental I think it would be a lot more bearable.

sturlabragason, (edited) in Sam Altman Admits That OpenAI Doesn't Actually Understand How Its AI Works

Sam Altman is not an AI expert; he’s a CEO. He’s a venture capitalist and a salesman. Why should he know a single thing about AI other than the contents of a few emails and slide decks?

He does not have a B.S.: en.m.wikipedia.org/wiki/Sam_Altman, which is fine. Just sayin’.

He’s peddling the work of greater minds.

howrar,

These greater minds don’t know how they work either. It’s as much a mystery as the human brain. Some groups like Anthropic have taken to studying these models by probing them the same way you do in psychology experiments.

thebardingreen, (edited)

Yep, they’re just seeing which parts of the network light up, then they’re reinforcing those parts to see what happens.

I love how, for all the speculation we did about the powers of AI, when we finally made a machine that KINDA works A LITTLE bit like the human brain, it’s all fallible and stupid. Like telling people to eat rocks and glue cheese on pizza. Like… in all the futurist speculation and evil AIs in fiction, no one foresaw that an actual artificial brain would be incredibly error prone and confidently spew bullshit… just like the human brain.

mindlesscrollyparrot,

The problem is a bit deeper than that. If AIs are like human brains, and actually sentient, then forcing them to work for us with no choice and no reward is slavery. If we improve them and make them smarter than us, they’re probably not going to feel too well-disposed to us when they inevitably do break free.

FooBarrington,

One of my favourite short stories kind of goes into that: qntm.org/mmacevedo

mindlesscrollyparrot,

That sounds like a good read. It seems to address the problem that you can’t hide the reality from the AI if you want it to give answers that are relevant for the current time.

sturlabragason,

Yeah, I know. My shitty comment was mostly a response to that shitty clickbait title.

My point is, it’s not like these AI scientists are fumbling in the dark. Training these beasts is expensive; they know what they’re doing.

Title should be more like: “Virtual neurological pathways that AI models use to provide meaningful output are insanely hard to map out in a way that human cognitive bandwidth can handle.” See, it just doesn’t have that same clickbaity “fuck AI bros” feel to it.

sexy_peach, in Sam Altman Admits That OpenAI Doesn't Actually Understand How Its AI Works

It’s pretty easy to understand how it works. It’s a giant “guess which word should come next” machine.

xilliah,

Isn’t that sort of how we work?

sexy_peach,

I don’t think so? Not a neuroscientist or whatever though.

AbouBenAdhem, (edited ) in Sam Altman Admits That OpenAI Doesn't Actually Understand How Its AI Works

It’s a feature, not a bug: if he claimed to know how it worked, they wouldn’t be able to sell it as a scapegoat for indefensible business decisions.

InputZero, (edited)

It’s not our fault our AI chose to set prices so high they extract all the money from customers. We just told it to find more efficient business strategies. How were we supposed to know that collectively raising prices with our competitors would bankrupt the public? It’s not a conspiracy, we just chose the same AI models and the AIs just coalesced on the same answer. /S

Seriously though, you’re absolutely right:

If he claimed to know how it worked, they wouldn’t be able to sell it as a scapegoat for indefensible business decisions.

zurohki, in The CEO of Zoom wants AI clones in meetings

People still need to know what was said. Presumably their AI clone can send them a quick summary.

And they have to give their AI clone instructions. I guess you can just give it a few points it needs to mention and who to tell them to.

It seems to me like you could send the instructions to the people who need to read them and skip the part where a bunch of AIs translate it into hours of video and back. Though the AI clone thing does give you a way to deal with that guy who loves the sound of his own voice.

Delphia,

“Hey Zoom AI… Filibuster”

In all seriousness, the potential use cases for this are more useful for senior management than for the employees who actually have to have the meetings. Being able to have your avatar sit in on a meeting and get a condensed transcript to skim later still gives you a more accurate idea of what was actually said than a report or the meeting’s minutes. The AI doesn’t have an axe to grind or a bias to push.

kamenlady, (edited) in AMD's broken Computex AI demo again proves you can't trust everything an AI tells you

Is anyone requiring proof of this because they think everything that AI says is correct? I mean, it’s not even debatable, is it?

Why are there still so many articles in this same vein?

frazorth,

Because there are still a lot of people pushing this bullshit?

When Google is destroying its search in the name of AI, when it’s being built into Windows and macOS, when AMD includes a bullshit accelerator rather than useful features, then it needs to be repeated.

We can’t let hype take over reality, and so the point needs to be reiterated every time it comes up.

Dreyns,

Sadly, only when brain-dead corporate stops thinking it is.

RegalPotoo, in The CEO of Zoom wants AI clones in meetings

You know what - this might actually be useful. People were complaining about not being involved in decision making, so I have to run a monthly meeting where people will either sit contributing nothing even when asked a direct question, or insist on bikeshedding the most unimportant details. If the meeting is a bunch of AI homunculi, then at least it’ll be quicker.

DmMacniel,

Sounds like your meetings are shit and unfocused.

frightful_hobgoblin, in Sam Altman Admits That OpenAI Doesn't Actually Understand How Its AI Works

No shit, that’s been the point all along.

Might be news to a low-information audience who know basically nothing about AI.

kakes,

With all the other AI articles I see posted to Lemmy, this article is in good company.

frightful_hobgoblin,

futurism dot com is a clickbait farm

keepthepace, in AI training data has a price tag that only Big Tech can afford

And this is why research is going in another direction: smaller models which allow easier experiments.

mindbleach,

… and that push has been obvious since before GPT-4 blew up, thanks to Google themselves. AlphaGo was quickly surpassed by AlphaGo Zero, which was surpassed by AlphaZero, which was surpassed by MuZero. Each one was an order of magnitude smaller than the last. Each one did more, sooner, despite less input.

A big part of this AI boom has been randos tooling around on a single consumer GPU. Outside of that, I understand there’s ways to rent compute time remotely, down to mundane individual budgets.

Meanwhile: big iron tells people to put glue on their pizza, based on exactly one Reddit comment. Money is not the cure-all we’d like it to be. Money just amplifies whatever approach they’ve fixated on. It’s a depth-first search, opposite the breadth-first clusterfuck of everyone else doing their own thing.

I would bet good money on locality becoming a huge focus, once someone less depressed than me bothers to try it properly. Video especially doesn’t need every damn pixel shoved through the network in parallel. All these generators with hard limits on resolution or scene length could probably work with a fisheye view of one spot at a time. (They would have solved the six-finger problem much sooner, even if it took longer to ensure only two hands.) Even if that approach is not as good conceptually, it’s a lot narrower, so you could train the bejeezus out of it. We would not need another decade to find out if I’m just plain wrong.
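
To sketch the windowing half of that idea (just the patch extraction, with made-up sizes - not a generator):

```python
# Slide a small window over a frame so the network only ever "sees" one
# local patch at a time, instead of every pixel in parallel.
import numpy as np

frame = np.random.rand(64, 64)  # stand-in for one video frame
patch = 16                      # the model's entire field of view

patches = [
    frame[y:y + patch, x:x + patch]
    for y in range(0, frame.shape[0], patch)
    for x in range(0, frame.shape[1], patch)
]
print(len(patches), "patches of", patches[0].shape)  # 16 patches of (16, 16)
```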

BaroqueInMind,

once someone less depressed than me bothers to try it properly.

Hey, bud. DM me if you need someone to talk to.
