California has pushed out badly worded laws in the past. Here’s a definition from the bill.
“Artificial intelligence model” means an engineered or machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs that can influence physical or virtual environments and that may operate with varying levels of autonomy.
Tell me that wouldn’t also apply to a microwave oven.
Sounds like a whole lot of fraud to me. I don’t understand how he is able to keep diverting resources from Tesla to his other companies, unless they’re tied together under some corporate entity. At a minimum he’s stealing from the shareholders. Why have the shareholders not voted him out yet?
It takes a lot of something to drive a company into the ground and demand $56 billion for doing it.
Look, I get that we all are very skeptical and cynical about the usefulness and ethics of AI, but can we stop with the reactive headlines?
Saying we know how AI works because it’s ‘just predicting the next word’ is like saying I know how nuclear energy works because it’s ‘just a hot stick of metal in a boiler’
Researchers who work on transformer models understand how the algorithm works, but they don’t yet know how their simple programs can generalize as much as they do. That’s not marketing hype, that’s just an acknowledgement of how relatively uncomplicated their structure is compared to the complexity of their outputs.
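For what it’s worth, the “just predicting the next word” framing really is that simple at the surface. Here’s a toy sketch (my own illustration, nothing like a real transformer): a bigram model that predicts the most frequent follower seen in training text. The gap between something this trivial and what large models actually do is exactly the point being made above.

```python
# Toy "next word prediction": count which word most often follows
# each word, then predict that. Real transformers are vastly more
# capable, but the surface-level framing is just this.
from collections import Counter, defaultdict

def train_bigrams(text):
    follows = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    # most common word seen after `word`, or None if never seen
    return follows[word].most_common(1)[0][0] if follows[word] else None

model = train_bigrams("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # → 'cat' ("the" is followed by "cat" twice, "mat" once)
```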
I hate that we can’t just be mildly curious about AI, rather than either extremely excited or extremely cynical.
Researchers who work on transformer models understand how the algorithm works, but they don’t yet know how their simple programs can generalize as much as they do.
They do!
You can even train small networks by hand with pen and paper. You can also manually design small models without training them at all.
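To make that concrete, here’s the kind of thing you could literally do on paper (my own minimal sketch, not from the thread): a single perceptron whose weights are hand-designed rather than trained, so that it computes logical AND.

```python
# A single perceptron small enough to evaluate with pen and paper.
# Weights are hand-picked (no training at all) so it computes AND.

def step(x):
    # hard threshold activation
    return 1 if x >= 0 else 0

def perceptron(inputs, weights, bias):
    # weighted sum of inputs, then threshold
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

# Hand-designed: the output fires only when both inputs are 1,
# because 1 + 1 - 1.5 = 0.5 >= 0, while any other case is negative.
weights, bias = [1.0, 1.0], -1.5

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron([a, b], weights, bias))
```

Swapping in different hand-picked weights gets you OR, NOT, and so on; it’s chaining millions of these that produces the behavior nobody can trace by hand.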
The interesting part is that this dated tech is producing such good results now that we throw our modern hardware at it.
If you don’t understand how your algorithm is reaching its outputs, you obviously don’t understand the algorithm. Knowing what you’ve made is different to understanding what it does.
Knowing what you’ve made is different to understanding what it does.
Agree, but also - understanding what it does is different to understanding how it does it.
It is not a misrepresentation to say ‘we have no way of observing how this particular arrangement of ML nodes responds to a specific input differently to another arrangement’ - the best we can do is probe the network like we do with neuron clusters and see what each part does under different stimuli. That uncertainty is meaningful, because without a way to understand how small changes to the structure result in apparently very large differences in output, we’re basically just groping around in the dark. We can observe differences in the outputs of two different models, but we can’t meaningfully see the node activity in any way that makes sense or is helpful. The things we don’t know about LLMs are some of the same things we don’t know about neurobiology, and just as significant to remedying the dysfunctions and limits of both.
The fear is that even if we believe what we’ve made thus far is an inert but elaborate Rube Goldberg machine (that’s prone to abuse and outright fabrication) that looks like ‘intelligence’, we still don’t know if:

- what we think intelligence looks like is what it would look like in an artificial recreation
- changes we make to its makeup might accidentally stumble into something more significant than we intend
It’s frustrating that this field is getting so much more attention and resources than I think it warrants, and the reason it’s getting so much attention in a capitalist system is honestly enraging. But it doesn’t make the field any less intriguing, and I wish all discussions of it didn’t immediately get dismissed as overhyped techbro garbage.
OK, I suppose I see what you’re saying, but I think headlines like this are important to shaping people’s understanding of AI, rather than being dismissive - highlighting that, like with neuroscience, we are still thoroughly in the research phase rather than having end products to send to market.
Yea, I’m with ya. Some people interpreted this as marketing hype, and while I agree with them that mysticism around AI is driven by this kind of reporting I think there’s very much legitimacy to the uncertainty of the field at present.
If everyone understood it as experimental I think it would be a lot more bearable.
Sam Altman is not an AI expert, he’s a CEO. He’s a venture capitalist and salesman. Why should he know a single thing about AI other than the content of a few emails and slide decks?
These greater minds don’t know how they work either. It’s as much a mystery as the human brain. Some groups like Anthropic have taken to studying these models by probing them the same way you do in psychology experiments.
Yep, they’re just seeing which parts of the network light up, then they’re reinforcing those parts to see what happens.
I love how, for all the speculation we did about the powers of AI, when we finally made a machine that KINDA works A LITTLE bit like the human brain, it’s all fallible and stupid. Like telling people to eat rocks and glue cheese on pizza. Like… in all the futurist speculation and evil AIs in fiction, no one foresaw that an actual artificial brain would be incredibly error prone and confidently spew bullshit… just like the human brain.
The problem is a bit deeper than that. If AIs are like human brains, and actually sentient, then forcing them to work for us with no choice and no reward is slavery. If we improve them and make them smarter than us, they’re probably not going to feel too well-disposed to us when they inevitably do break free.
That sounds like a good read. It seems to address the problem that you can’t hide the reality from the AI if you want it to give answers that are relevant for the current time.
Yeah, I know. My shitty comment was mostly a response to that shitty clickbait title.
My point is, it’s not like these AI scientists are fumbling in the dark. Training these beasts is expensive, they know what they’re doing.
Title should be more like: “Virtual neurological pathways that AI models use to provide meaningful output insanely hard to map out in a way that human cognitive bandwidth can handle.” See, it just doesn’t have that same clickbaity “fuck AI bros” feel to it.
It’s not our fault our AI chose to set prices so high they extract all the money from customers. We just told it to find more efficient business strategies. How were we supposed to know that collectively raising prices with our competitors would bankrupt the public? It’s not a conspiracy, we just chose the same AI models and the AIs just coalesced on the same answer. /S
Seriously though, you’re absolutely right
If he claimed to know how it worked, they wouldn’t be able to sell it as a scapegoat for indefensible business decisions.
People still need to know what was said. Presumably their AI clone can send them a quick summary.
And they have to give their AI clone instructions. I guess you can just give it a few points it needs to mention and who it needs to tell them to.
It seems to me like you could send the instructions to the people who need to read them and skip the part where a bunch of AIs translate it into hours of video and back. Though the AI clone thing does give you a way to deal with that guy who loves the sound of his own voice.
In all seriousness, the potential use cases for this are more useful for senior management than for the employees who actually have to have the meetings. Being able to have your avatar sit in on a meeting and get a condensed transcript to skim later still gives you a more accurate idea of what was actually said than a report or the meeting minutes would. The AI doesn’t have an axe to grind or a bias to push.
Because there are still a lot of people pushing this bullshit?
When Google is destroying its search in the name of AI, when it’s being built into Windows and macOS, when AMD includes a bullshit AI accelerator rather than useful features, then it needs to be repeated.
We can’t let hype take over reality, and so the point needs to be reiterated every time it comes up.
You know what - this might actually be useful. People were complaining about not being involved in decision making, so I have to run a monthly meeting where people will either sit contributing nothing even when asked a direct question, or insist on bikeshedding the most unimportant details. If the meeting is a bunch of AI homunculi then it’ll be quicker, at least.
… and that push has been obvious since before GPT-4 blew up, thanks to Google themselves. AlphaGo was quickly surpassed by AlphaGo Zero, which was surpassed by AlphaZero, which was surpassed by MuZero. Each one was an order of magnitude smaller than the last. Each one did more, sooner, despite less input.
A big part of this AI boom has been randos tooling around on a single consumer GPU. Outside of that, I understand there are ways to rent compute time remotely, down to mundane individual budgets.
Meanwhile: big iron tells people to put glue on their pizza, based on exactly one reddit comment. Money is not the cure-all we’d like it to be; it just amplifies whatever approach they’ve fixated on. It’s a depth-first search, opposite the breadth-first clusterfuck of everyone else doing their own thing.
I would bet good money on locality becoming a huge focus, once someone less depressed than me bothers to try it properly. Video especially doesn’t need every damn pixel shoved through the network in parallel. All these generators with hard limits on resolution or scene length could probably work with a fisheye view of one spot at a time. (They would have solved the six-finger problem much sooner, even if it took longer to ensure only two hands.) If that approach is not as good, conceptually - it’s a lot narrower, so you could train the bejeezus out of it. We would not need another decade to find out if I’m just plain wrong.