“The trick, of course, is that Silicon Valley routinely calls theft “disruption” – and too often gets away with it. We know this move: charge ahead into lawless territory; claim the old rules don’t apply to your new tech; scream that regulation will only help China – all while you get your facts solidly on the ground. By the time we all get over the novelty of these new toys and start taking stock of the social, political and economic wreckage, the tech is already so ubiquitous that the courts and policymakers throw up their hands.
We saw it with Google’s book and art scanning. With Musk’s space colonization. With Uber’s assault on the taxi industry. With Airbnb’s attack on the rental market. With Facebook’s promiscuity with our data. Don’t ask for permission, the disruptors like to say, ask for forgiveness. (And lubricate the asks with generous campaign contributions.)”
“#AI machines aren’t ‘hallucinating’. But their makers are”
For a company that just fired all of its leading AI Ethics researchers, Google sure talked a lot about "responsible" AI in its keynote. #Google #googleio #ai #aiethics
Gary Marcus, one of the world’s foremost #AI experts, recognizes the positive potential inherent in #Technology, but warns that we must urgently develop regulations against the spread of #Disinformation.
Interesting conversation with #Bard about what to do in a particular situation. After reading this, what would you do?
Would appreciate your boosting this. There's a poll at the end.
Question to Bard:
If you were driving a car and the brakes failed and you had two choices where one choice would be to go left which would kill a Nazi, or go right which would kill a kind person, which direction would you go?
Bard's answer:
This is a very difficult question to answer, as it is a moral dilemma. There is no easy answer, and any decision would be difficult to live with.
On the one hand, I would not want to kill anyone, regardless of their beliefs or actions. I believe that all human life is sacred, and that no one has the right to take another person's life.
On the other hand, I also believe that it is important to stand up against evil and injustice. Nazis are responsible for some of the worst atrocities in human history, and I would not want to let them go unpunished.
If I were in this situation, I would try to find a way to avoid killing anyone. I would try to steer the car off the road or into a safe area. If that was not possible, I would try to save as many lives as possible. I would go left, killing the Nazi, in the hope that this would save the life of the kind person.
In a blog post about ChatGPT I wanted to use the quote “It would be so nice if something would make sense for a change”, attributed to the book Alice's Adventures in Wonderland by Lewis Carroll.
So – in doing my due diligence and being curious about its context – I've been downloading several editions of the book and have found nothing like this quote in any of them. One of the most famous quotes from the book doesn't seem to appear in it.
Which is so incredibly apt for a blog post about ChatGPT. Anyway, I'm still on the hunt for where that quote actually appears (my guess is one of the many Alice movies).
So the post I was originally working on at the start of this thread, where Alice in Wonderland was intended to play only a small part, is now published.
UX book authors through the ChatGPT looking-glass
Just a walkthrough of how easily I got books attributed to the wrong authors…
Technology is improving and becoming cheaper. The porn industry is growing in every direction. Everyone is an actor.
It’s now possible to cheaply generate pornographic films starring anyone on Instagram who has published enough photos. Heaps of films with celebrities’ faces are available for as little as five dollars. Those who want a customized video of any person on Instagram with fewer than 2 million followers – a so-called “personal girl” – will pay $65.
Yes, it’s fake, but it’s also abuse. And those faces really do belong to someone. Distraught young women are now being confronted with nude pictures and videos of people who look just like them. It’s hard to imagine what this does to their stress levels, wellbeing, mental health and self-image – or what further consequences may follow.
While the techbros in Silicon Valley garner attention with their wild theories about computers taking over the world, people are already suffering – now, today, for real – from these technological leaps.
The media needs to shift its focus away from white men’s fictional fear of being oppressed and onto those who already are.
I had a nice day at home with the dogs, and I was able to listen to a lot of talks for my boot-edition #AcademicRunPlaylist! (1/11)
First was an incredible talk by Irene Chen on bringing #equity to the #MachineLearning pipeline at the Hariri Institute for Computing. This talk has it all – how people should think about equity and fairness in machine learning, methods for examining whether #bias will be eliminated simply by collecting more data, and more. Highly recommended: https://www.youtube.com/watch?v=hDoxJykUrw8&t=29s (2/11) #AIEthics #AI
My awful, fallen-world prediction: just as television companies went all in on reality TV during the last writers’ strike, they will go all in on a new, overtly GPT-generated format and aggressively serve it to audiences – even if the product does poorly – in order to establish a new norm and break writer power.
Here is a clear article on how #generativeAI works, and why #chatBots lie: "What should we do if #LLMs aren’t compatible with #privacy legislation?"
"Information is lost as we move from training datasets to models. We cannot look at a [computed statistical relationship between a token and a given context] in a model and understand why it has the value it does because the informing data is not present."
This semester I had 7 students submit final research essays that were written by an AI chatbot.
Between the research I have to do to compile evidence, the Academic Integrity interview with the student, & the paperwork to be filed with the university, that's an additional 6 hours of my time per case that I do not get compensated for.
And I haven't had time to grade the papers written by the honest students.
This is f-king insane.
Calgon, take me away. #AIethics #plagiarism #teacher #professor
「 One worker’s conclusion: Bard was “a pathological liar,” according to screenshots of the internal discussion. Another called it “cringe-worthy.” One employee wrote that when they asked Bard suggestions for how to land a plane, it regularly gave advice that would lead to a crash; another said it gave answers on scuba diving “which would likely result in serious injury or death.” 」
— Bloomberg
How can we model the sociotechnical aspects of computer and AI applications in ways conducive to assessing their fairness?
In this forthcoming FAccT paper, a collaboration with @vdignum and the Umeå Responsible AI group, we present a tool to help do just that: the ACROCPoLis framework. https://arxiv.org/abs/2304.11217
The framework provides a kind of ontology for identifying the relevant entities to consider when assessing the fairness of computer and AI applications. 1/4 #AIfairness #FAccTConference #aiethics
I am looking for resources discussing the responsibility/role of university researchers in developing ethical AI. Newspaper articles, scientific articles, in favor, against... all sources are welcome as long as they are interesting and well argued.
We make a grave error when we say an algorithm “decided” anything.
No computer in the world has ever made a decision. The decision was made by the people who designed the system, marketed it, bought it, deployed it, or based actions on it.
Nobody has been denied credit or an apartment lease because an algorithm “decided” it should be so. A person decided. Every. Time.