Corporations can influence the directions and conclusions of academic research without corrupting the beliefs of any individual scientist. In a phenomenon known as industry selection bias, companies direct funding and/or data toward scientists who already support the research approaches or technological solutions favored by industry.
A new paper out in Synthese expands on the earlier Holman and Bruner model of this scenario to illustrate how this process works.
@ct_bergstrom interesting article - looking forward to reading this in depth.
The premise certainly rings true of #AI research. The field is largely funded by corporations holding large amounts of data of uncertain quality, in settings where the cost of a bad decision is really low.
This ecosystem deprioritizes methods that could work in low-resource settings, or problems that require assured autonomy.
Messing around with Alpaca 13B on Dalai LLaMA and when I ask anything deeper than "please describe the taste of a papaya," the output strongly resembles the BS you'd get from a high school student who hadn't done the reading. (Hint: Heinlein didn't write "The Marching Morons." Kornbluth did.)
Diabolus ex Machina: The Creation of Adam, Michelangelo’s iconic fresco on the ceiling of the Sistine Chapel, depicts the Christian deity breathing life into the very first man. Today, a handful of incredibly rich men in Silicon Valley claim a similarly grandiose mission. This is, apparently, the dawn of the age of “artificial intelligence”.
According to its creators, this emerging tech is god-like in its abilities. But the devil is in the details. We spoke to several African AI researchers who see, instead, patterns of exploitation and extraction.
That's what happens when #bigTech promotes its #AI products as autonomous, all-knowing tools, and most mainstream media take it at face value without any critical thought...
1/ There's a real niche for apps or tools that do a good job of suggesting #news stories that match our interests.
I have experience with three that do a terrible job: #GoogleNews, #GoogleDiscover, and the #Twitter algorithm. All three give us options to say "more like this" or "less like this" for each story. All three of them systematically ignore my "less like this" feedback.
4/ #AI could play a role here. If the app pushes me a story on a new treatment for psoriasis, and I say "less like this", the app should not merely ask whether I want less on "science" or less on "health". It should be smart enough to suggest more fine-grained categories. It should be smart enough to let me propose my own categories.
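One way to picture the fine-grained feedback described above: let the reader mute any level of a story's topic path, not just the coarse top-level category. This is a minimal hypothetical sketch; the function names and the topic-path structure are illustrative assumptions, not taken from any real news app.

```python
# Hypothetical sketch: fine-grained "less like this" feedback.
# Instead of only muting a whole top-level topic ("health"), the reader
# can mute any level of the story's topic path ("health/skin/psoriasis").
# All names and data here are made up for illustration.

def feedback_options(topic_path):
    """Offer every prefix of the topic path as a muteable category,
    from coarsest to most specific."""
    return ["/".join(topic_path[:i + 1]) for i in range(len(topic_path))]

def is_muted(topic_path, muted):
    """A story is hidden if any prefix of its topic path has been muted."""
    return any(option in muted for option in feedback_options(topic_path))

story = ["health", "skin", "psoriasis"]
print(feedback_options(story))
# → ['health', 'health/skin', 'health/skin/psoriasis']

muted = {"health/skin/psoriasis"}   # reader mutes only the narrow topic
print(is_muted(story, muted))                     # → True
print(is_muted(["health", "cardiology"], muted))  # → False: other health news stays
```

The design point matches the post: offering the full path as options lets the reader say "less psoriasis" without losing all health or science coverage, and letting them define their own path segments would cover the "propose my own categories" case.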
Those who fear that #AI will take their jobs SHOULD lose their jobs to AI. Make room for progress. What? It’s inhumane? How many jobs did your car, laptop, phone and supermarket make redundant? Oh, now you get it!
#Apple has restricted employees from using #AI tools like #OpenAI’s #ChatGPT over fears confidential information entered into these systems will be leaked or collected.
I've helped so many people in my career to find work. Now it is me. I've been looking for a job for a long time. As a person who calls on #tech and #ai to be accountable (and offers to help do that) you'd think they'd want me. But nope. Is anyone in my network hiring? Please. #fediverse #FediHire #fedihired
Texas A&M professor fails students because he asked ChatGPT if it wrote their papers and it falsely said yes. They had already graduated and now can't get their diplomas. 🤦🏾‍♂️
An example of how the real threat of AI is people applying the technology in the wrong situations because they don’t understand how it works.
We can't stop students from using #AI and shouldn't even try. It's a powerful tool and we need to adjust classes and assignments to help students make the best use of this powerful new tech, rather than fight it.
When I was in school many teachers thought using Wikipedia was cheating, rather than a useful tool, if used well. I think that was backward logic in the same way that fighting #ChatGPT in #education is today.