Futurism wanted to know: What kind of a company creates fake authors for a newspaper or magazine and operates them like sock puppets? What they discovered "should alarm anyone who cares about a trustworthy and ethical media industry." https://flip.it/oxcxAC #Tech #Technology #AI #Journalism
In this week’s Disconnect Roundup, Apple delivered an insult to life itself by crushing all human creativity into an iPad Pro in its recent ad. Plus, recommended reads, labor updates, and other news you might have missed.
“Mario Fusco, a great software developer, once said: ‘The code you write makes you a programmer. The code you delete makes you a good one. The code you don’t have to write makes you a great one.’ So maybe, for once, AI is on to something.”
We have an exciting main event at #HCXAI at #chi2024 today!
We have @janethaven from @datasociety and
Kush Varshney from @ibmresearch for an invigorating discussion on AI governance and policymaking to take Explainable AI beyond academia.
What if Fediverse instances adopted the kind of DRM Facebook has, with posts locked for users who aren't signed in? That could help people who are scared of AI "taking over," because let's be real, AI just scrapes the information you post online. If we did this, would AI "die"?
I don't personally care, so to speak. The AI won't care; or rather, the companies developing these LLMs will just counter with new strategies to obtain PII from websites.
Imagine an AI haiku writing program and a master haiku artist.
The artist works with meaning and intent. Some things are popular, some not, but everything was chosen to communicate something.
The AI is matching words that frequently go together, has no concept of meaning, and spews high-speed gibberish. BUT the people who vote results up or down are selecting works that seem like they could be 'deep'.
Given the top 5 from both, could you tell the difference?
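The "matching words that frequently go together" mechanism above can be sketched as a bigram Markov chain. This is a toy illustration only, not any real haiku program; the corpus, the `follows` table, and the `babble` helper are all made up for the example:

```python
import random

# Tiny illustrative corpus (fragments of classic haiku translations).
corpus = (
    "an old silent pond a frog jumps into the pond "
    "splash silence again the light of a candle "
    "is transferred to another candle spring twilight"
).split()

# Build a bigram table: word -> list of words observed to follow it.
# Frequent successors appear more often, so random.choice favors them.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def babble(start, n_words):
    """Emit n_words by repeatedly sampling a plausible successor.

    There is no notion of meaning here, only co-occurrence statistics;
    unseen words fall back to a uniform draw from the corpus.
    """
    out = [start]
    for _ in range(n_words - 1):
        out.append(random.choice(follows.get(out[-1], corpus)))
    return " ".join(out)

random.seed(0)
print(babble("the", 5))
```

Each word follows the previous one only because they co-occurred in training text, which is exactly why the output can look locally fluent yet carry no intent.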
A very good toot thread by @bkastl on the commercial exploitation of users' free contributions on Stack Overflow via OpenAI's ChatGPT:
#AI #GenerativeAI #AIRegulation #TechPolicy: "The overwhelming message that emerges from these books, ironic as it may seem, is a newfound appreciation of the collective powers of human creativity. We rightly marvel at the wonders of AI, but still more astonishing are the capabilities of the human brain, which weighs 1.4kg and consumes just 25 watts of power. For good reason, it has been called the most complex organism in the known universe.
As the authors admit, humans are also deeply flawed and capable of great stupidity and perverse cruelty. For that reason, the technologically evangelical wing of Silicon Valley actively welcomes the ascent of AI, believing that machine intelligence will soon supersede the human kind and lead to a more rational and harmonious universe. But fallibility may, paradoxically, be inextricably intertwined with intelligence. As the computer pioneer Alan Turing noted, “If a machine is expected to be infallible, it cannot also be intelligent.” How intelligent do we want our machines to be?"
#AI #GenerativeAI #Police #Surveillance: "Moreover, if the AI-generated report is incorrect, can we trust police will contradict that version of events if it's in their interest to maintain inaccuracies? On the flip side, might AI report writing go the way of AI-enhanced body cameras? In other words, if the report consistently produces a narrative from audio that police do not like, will they edit it, scrap it, or discontinue using the software altogether?
And what of external reviewers’ ability to access these reports? Given police departments’ overly intense secrecy, combined with a frequent failure to comply with public records laws, how can the public, or any external agency, be able to independently verify or audit these AI-assisted reports? And how will external reviewers know which portions of the report are generated by AI vs. a human?
Police reports, skewed and biased as they often are, codify the police department’s memory. They reveal not necessarily what happened during a specific incident, but what police imagined to have happened, in good faith or not. Policing, with its legal power to kill, detain, or ultimately deny people’s freedom, is too powerful an institution to outsource its memory-making to technologies in a way that makes officers immune to critique, transparency, or accountability." https://www.eff.org/deeplinks/2024/05/what-can-go-wrong-when-police-use-ai-write-reports
#AI #GenerativeAI #Chatbots #HR #JobMarket: "Job seekers, frustrated with corporate hiring software, are using artificial intelligence to craft cover letters and résumés in seconds, and deploying new automated bots to robo-apply for hundreds of jobs in just a few clicks. In response, companies are deploying more bots of their own to sort through the oceans of applications.
The result: a bot versus bot war that’s leaving both applicants and employers irritated and has made the chances of landing an interview, much less a job, even slimmer than before.
“You’re fighting AI with AI,” said Brad Rager, chief executive of Crux, a recruiting firm that matches cybersecurity specialists with employers.
The AI arms race is bad for job candidates, he said, who feel defeated when online applications come to nothing, and for employers, who are frustrated when imprecise AI tools highlight weak candidates. “There’s so much promise, but there’s a lot of crap and garbage,” Rager said of the tools used by employers."
Is your #school and/or #district facing the #ESSER cliff combined with other downward #budget pressure? We could be at a pivotal moment in #education as the landscape confronts remarkable forces of change: #technology (not just but including #AI, of course, along with #HCI and #cybersecurity), cultural shifts, politics generally, social-emotional development in society, and so much more... Here's the thing: it's going to work out. But put on your seatbelt; it might get a little bumpy.
The non-profit foundation @ftdl has launched a fundraiser for additional hardware for NapiGen 🚀, a Polish 🇵🇱 subtitle and transcription generator, and for further LLM projects.
Will you help solve the problem of missing Polish subtitles on most YouTube content and missing transcriptions for podcasts?
P.S. Everything is or will be open-source, running in the foundation's own server room in Kraków on its own hardware, with no "leaks" to the outside and no "free" fine-tuning for American corporate AIs.
I see a lot of #Blind people ‘apologizing’ for using #AI – I am not going to be doing that. Not now and not ever.
We need to have a conversation about how these models have been trained and how they are used going forward, but shaming disabled people for taking any chance they can to mitigate some of the challenges they face in their lives every day is a 'privilege' we don't all enjoy.
When you post an image and choose not to add alt text, publish an inaccessible PDF, release an inaccessible app - these are all choices! Maybe a tiny part of the righteous outrage that some of these people are spewing could be aimed at that?
#AI #GenerativeAI #Automation #Unemployment #ViceMedia #Journalism #Media #Apple #iPad: "On the one hand it is really wild to me that Apple would miscalculate on something so drastically — Apple prides itself on its marketing, which is as crucial to the company as any of its technologies — but on the other, I’m thankful. The ad has clarified some things: amid (yet another) week in which human writers and artists were watching their work degraded into content fodder, Apple came along and handed us a perfect visual metaphor for one of our most potent fears about big tech right now — namely, that it is crushing the arts and transmuting them into dull consumer products. And, I might add, they are so content with what they are doing, that they are more than happy to broadcast that intent explicitly via advertising — signed off on by the highest echelons of Apple, and Tim Cook himself tweeting it out — with an exclamation point in the title. “Crush!” indeed." https://www.bloodinthemachine.com/p/for-artists-writers-humans-big-techs
"While it may seem harmless if #AI systems cheat at games, it can lead to "breakthroughs in deceptive AI capabilities" that can spiral into more advanced forms of AI deception in the future, Park added.
"Some AI systems have even learned to cheat tests designed to evaluate their safety, the researchers found. In one study, AI organisms in a digital simulator "played dead" in order to trick a test built to eliminate AI systems that rapidly replicate.
"By systematically cheating the safety tests imposed on it by human developers and regulators, a deceptive AI can lead us humans into a false sense of security," says Park."
"The most striking example of #AI deception the researchers uncovered in their analysis was Meta's CICERO, an AI system designed to play the game Diplomacy, which is a world-conquest game that involves building alliances. Even though Meta claims it trained CICERO to be "largely honest and helpful" and to "never intentionally backstab" its human allies while playing the game, the data the company published along with its Science paper revealed that CICERO didn't play fair.
"We found that Meta's AI had learned to be a master of deception," says Park. "While Meta succeeded in training its AI to win in the game of Diplomacy -- CICERO placed in the top 10% of human players who had played more than one game -- Meta failed to train its AI to win honestly."
Well, on the plus side, all the vast pages of AI slop you find when searching a topic nowadays sound the same. So you can quickly move on to a page written by humans.
"You're interested in learning more about X? Well, it's a fascinating topic. Here, we'll talk about all the ways you can do X! The history of X is fascinating..." bleah bleah bleah