Context for all of the above: the new sus af board members of Mastodon's new US-based non-profit: https://kolektiva.social/
Seriously, go read that thread.
I want to make it clear though: yes, I am disappointed by this, but I do not believe it's helpful to panic, start flamewars, get into spats, or call people names.
Instead, it's better to put that energy and time into building alternatives. The more feature-complete fedi instance software is out there, the healthier the whole network is.
The wonderful thing about fedi is that moving between instances (and between different software projects that power them) does not necessarily mean losing contact with your network.
As opposed to walled gardens, here we can in fact "vote with our feet".
His toot has been living rent-free in my head ever since.
I had ranted a few times before about how "content" is a corporate-y way to devalue art. How "user-generated content" is a term designed to make it easier to deny the significance (not just monetary) of the amazing stuff people create online.
Contrasting this with "AI art" is jarring, and spot-on. 👀
"It's art only if it was auto-generated by a stochastic black box controlled by a multi billion dollar corporation; otherwise it's just sparkling content." 🧐
@rysiek@leadegroot it's all from the Bill Gates take on what would rule the Internet: content is king. Which is true when it comes to ad revenue... Just cranking out junk will make you more AdSense revenue, but it won't guarantee its value. But all the money people focused on the content grind. YouTube changed the creators' dashboard from "videos" to "content." Netflix executives call their shows "content." This wasn't exactly a calculated thing. It's basically just how they see art. They cannot make art, but they can monetize content.
But suddenly there's AI... And now they think they are making something, so they just ignorantly call it "AI art." I think it's all part of their twisted psyche. They are patronizing to real artists because they see them as content cannons who create stuff for them to monetize... They do not see value in the art, though... But if they generate stuff from a prompt, they think it's magical and valuable art because they "created" it.
Hey @patrick_breyer_mep as #EUVoice is going to be shutting down, perhaps it's time to add building and maintaining sovereign social media infrastructure to campaign promises for the upcoming #Europarl elections? :blobcateyes:
Or is that somewhere there already and I just missed it? 👀
> The EDPS pilot project of EU Voice and EU Video has proved that public bodies, like EUIs, can offer social media platforms that respect individuals’ fundamental rights
> Unfortunately, despite our efforts to find a new home for EU Voice and EU Video in other EUIs, we have been unable to secure new ownership to maintain the servers and sustain operations at the high standards that EUIs and our users deserve
> Anyone using this thing will have wet, soapy, slippery hands, possibly also soap in their eyes, and they might be trying to deal with scalding hot or ice-cold water. :blobcatthinking:
> Okay, let's make the taps as smooth and difficult to grab and operate by touch as humanly possible. Or more. :blobcatfingerguns:
@rysiek Hotel designers: let’s pair the impossible-to-use shower with three identical bottles, nailed to the wall, with their purpose (“shampoo”, “conditioner”) written in 10pt text.
Let’s also make it impossible to turn on the shower without being covered in cold water.
"Attackers", "malicious actors", or simply passive voice would have worked just as well. Or better.
And it would have the added benefit of not muddying the waters and not implying that a particular creative community is out to get everyone and their dog.
In other words, there is zero reason — none! — to drag hackers into this. Apart from clickbait, perhaps, but I wouldn't dare suggest that that's a consideration at a serious outlet like #Mashable!
I still think that "AI" systems (like Amazon's infamously racist and sexist recruitment system[1]) can be useful, just not for the stated purpose.
Such systems seem to amplify racism, sexism, etc, already present in the data.
Instead of a tool to evaluate people, maybe they should be considered a tool to evaluate the training data set — and, by proxy, the company culture that produced it. :blobcateyes:
After all, if your historical recruitment data is racist… 👀
Here's a half-formed thought I need to mull a bit on:
Somehow, algorithmic (and especially "AI-driven") decision making tends to only be proposed in contexts where it can only — or mostly — affect those with the least power in the system.
Migrants and asylum seekers.
Prisoners.
Families using any form of state support (child benefits, food stamps, etc).
Palestinians in Gaza.
It somehow never gets proposed for use-cases where it might affect the wealthy and powerful.
There's a lot to take in here, but here's a kicker that really goes to the heart of this: there is no way for a person subject to SRAT to ever improve their score.
Every category of events to be recorded and taken into account by SRAT is negative.