@ErikJonker It doesn't stop at "reading": training an AI means creating derivative works. The fact that something is online doesn't mean it's in the public domain. People put things online within a context, with a particular purpose. The "grab all you can" approach with which Big Tech is now filling its datasets ignores this completely. #AI #BigTech #MoveFastAndBreakThings #dataset #ethiek
They don't need people to fill out forms anymore; #BigTech can ask for the bare minimum with complete confidence that the first thing we're going to do after we confirm our email is enter as much #marketing and #demographics #data about ourselves as we can cram into the character limit of the bio field (and possibly add extra identifier emojis to our usernames for good measure)
Showing how many non-federated, proprietary, privacy-invasive applications use #XMPP behind the scenes is not the hot take you think it is, unless that's the kind of vision you have for the #Fediverse's future, too.
Folks, apparently a Norwegian instance (snabelen.no) won’t be “associating with” me because of my position on not welcoming Meta and surveillance capitalists to the fediverse (https://mastodon.ar.al/@aral/110621068046632124)
Please feel free to consider whether you want to associate with them accordingly:
Proudly proclaiming that you won’t associate with people who take a principled stance against surveillance capitalism and toxic Silicon Valley people farmers like Facebook says so much more about you than it does about them.
As much as I want to see #BigTech forced out of #FLOSS, or at the very least forced to pay maintainers appropriately, we all know this ain't gonna happen unless they're forced to do so as the "lesser evil"...
#IBM acquiring #RedHat and killing off #RHEL-compatible distros with #grsecurity-style assholeism is a prime example.
So OFC you can use said license - just as I use #GPLv3 to commit #AssetDenial on my own projects...
Rather than making another 10-post-long thread 🧵 on the latest #Project92 developments, I had mercy on everyone & just wrote 1 blog post…
(TL;DR: the sky isn't falling, we have always been a diverse set of cultures and moderation standards, we will work it out on both fronts, but a lot is changing on all fronts & more will. And through it all we have all we need to protect our people.) Open to any thoughts, pushback, etc….
As a cultural #anthropologist, I appreciate your approach to seeing the #Fediverse as having diverse cultures. I really appreciate the even-handed tone and thoughtfulness you bring to these discussions. I am someone who came to Mastodon to get FAR, FAR, FAR away from #BigTech. I do not regret the move. I’m a newbie who is not (yet) wedded to any particular #fediculture. I find everything happening here fascinating.
I wouldn’t mind a trend to more female-identifying fedizens. 😜
All this conversation about #Meta on #Fedi feels like the worst parts of geek culture. So technical, without understanding context or what strikes can actually do. My thoughts:
Meta will make a great app for Fedi because it has more money to throw at the task. People will start using that because it's better. It will have QTs and an algorithm. People they want to follow will be there.
Key point of 'knowledge' produced by #generativeAI: it can be 'wrong' in the sense of 'wrong information,' but it's always 'right' insofar as it represents the correct functioning of the tech it comes from (treating algorithmic processing as a form of reasoning).
A logician might say: everything #ChatGPT yields is logically valid even if it counts as a false conclusion (an inaccurate output). There's a sleight-of-hand by which 'logically valid' appears as 'correct,' especially when... (1/n)
the logically valid output comes from a technology that's promoted as the height of innovation, mentally superior to humans, more objective than humans, etc.
#bigtech loves it when people confuse "logically valid" for "accurate."
In fact, at its core, any protocol – no matter how inefficient – that assumes hierarchy where one group is ruled over by another inherently favours centralisation. And Big Tech can always throw money at the problem as well as “optimise” protocols to improve economies of scale.
The alternative is #SmallWeb/p2p, where every node is equal.
@anderseknert all the more reason not to use copilot.
Let’s boycott #bigtech by using local products. I would rather #selfhost or pay someone in my vicinity to provide a service for me.
If there are no options like that, I just don’t use the product. We don’t need the latest and greatest if it means we give these companies more power to abuse us with.
"The CEO of OpenAI, Sam Altman, has repeatedly spoken of the need for global AI regulation.
But behind the scenes, OpenAI has lobbied for significant elements of the most comprehensive AI legislation in the world—the E.U.’s AI Act—to be watered down in ways that would reduce the regulatory burden on the company, according to documents about OpenAI’s engagement with E.U. officials obtained by TIME from the European Commission via freedom of information requests." #ai https://time.com/6288245/openai-eu-lobbying-ai-act/
One of Whatsapp's most baffling features is the "Backup to Google Drive" thing.
Does google pay them to fill up your storage with all those stupid forwarded gifs and fearmongering misinformation screengrabs that fill up your phone at such a phenomenal rate, to force you to buy more storage?
Do people actually use their instant message platform as some sort of Important Communications Archive? Despite there being no search facility, meaning that anything more than a month old is pretty much lost forever, backups or not?
Sorry if I have been unclear:
It is best to avoid software and internet #BigTech companies like the plague, as far as possible without significant losses in efficiency with regard to your peers.
If I were using #WhatsApp, momentary, interim storage in the #Google cloud might be an acceptable one-time exit strategy.
If you're worried about #LLM-based #AI, you're focusing on the wrong thing and may lack imagination.
This, and related developments (I can't view them as advancements, knowing how this all will end) are what's going to end the human race as we've known ourselves.
Much good can be derived from technologies like this, but we—being as we are—will ultimately go much too far.
I don't think we're prepared for our instant evolution (and, separately, eventual mechanization).
Catch #ClassOf09 if you can for a good preview, imo, of some of the very real and highly probable #societal problems that #LLMs, #ML, and #AI #algorithms will definitely cause or make exponentially worse, and which #BigTech is actively claiming AI will improve or solve.
#Power never yields power and whatever power we grant to #technology will eventually and deliberately be used against us by that technology.
The #bigTech companies got so used to profiting from the content their users generate for free that they don't want to give anything back!
I definitely support #Canada and its intention to force #Google and #Meta to pay for the news articles shared on their platforms.
But I'm not sure how effective the strategy is for #socialMedia, because those platforms became algorithm-driven and brainless a while back, and they can do without any meaningful content.
#Brasil #BigTech #Google #Meta #Lobbying #FakeNews: "Google and Meta – the parent company of Facebook, WhatsApp and Instagram – led a pressure and lobbying operation to remove Bill 2630, the PL das Fake News, from the Brazilian Congress's agenda. Over the course of 14 days, the companies and other big techs worked hard to push deputies to take positions against the proposal, with threats to remove content from social networks and the spread of a campaign of attacks against their accounts online.
Monitoring by Estadão revealed that the companies' pressure led at least 33 deputies to change their position between the approval of the urgency motion on April 19 and the bill's removal from the agenda on May 2."
“Overnight, workers’ professionalism has been disregarded in favor of ambiguous attendance tracking practices tied to our performance evaluations,” Chris Schmidt, a software engineer at Google and member of the grassroots Alphabet Workers Union, told CNN in a statement. “The practical application of this new policy will be needless confusion amongst workers and a disregard for our various life circumstances.”
#EU #Amazon #Monopsony #BigTech #DigitalMarketsAct: "For Europe’s 800k sellers who rely on Amazon to reach their customers, the monopsony conditions are blatant and shameless. Take listing fees: Amazon’s “flywheel” pitch claims that as the company grows, it achieves “economies of scale” that can lower its cost basis. But Amazon’s listing fees haven’t changed, even as the company experienced explosive growth in the EU (remember, sellers whose Amazon fees exceed their margins have to pass those fees onto buyers, and also raise their prices everywhere else to satisfy the Most Favored Nation requirement).
Amazon books the revenues from these fees — and other junk-fees it extracts from sellers — in Luxembourg, an EU member nation that provides a tax haven to multinational businesses that want to maintain the fiction that they operate their businesses out of the tiny kingdom. There is sharp competition in the EU to offer the most servile, corrupt environment for multinationals, and Luxembourg is a leader, along with Cyprus, Malta and, of course, Ireland:"
#AI #GenerativeAI #BigTech #Regulation #AIDoomsterism: "Tech firms must formulate industry standards for responsible development of AI systems and tools, and undertake rigorous safety testing before products are released. They should submit data in full to independent regulatory bodies that are able to verify them, much as drug companies must submit clinical-trial data to medical authorities before drugs can go on sale.
For that to happen, governments must establish appropriate legal and regulatory frameworks, as well as applying laws that already exist. Earlier this month, the European Parliament approved the AI Act, which would regulate AI applications in the European Union according to their potential risk — banning police use of live facial-recognition technology in public spaces, for example. There are further hurdles for the bill to clear before it becomes law in EU member states and there are questions about the lack of detail on how it will be enforced, but it could help to set global standards on AI systems.
Further consultations about AI risks and regulations, such as the forthcoming UK summit, must invite a diverse list of attendees that includes researchers who study the harms of AI and representatives from communities that have been or are at particular risk of being harmed by the technology."
This instance alone is big enough to be useful for Zuck, even with full defederation. If people remain, they'll be harvested; if people leave, Zuck wins too, because his main objective is to disrupt the fediverse if he can't control it.
Raising doubts about prominent #fediverse figures and projects is always a win for #BigTech, and even better when those figures hand themselves over on a silver platter.
#AI #GenerativeAI #LLMs #OpenSource #BigTech: "To be clear, examples like BLOOM and GPT-J are still far from the proverbial “start-up in a garage,” and were not developed for deployments comparable to other commercial models and their benchmarks. Big Tech, and large, well-capitalized companies more generally, still have advantages.
But the extent of that advantage depends on a key question: even if larger companies can build the highest performing models, will a variety of entities still be able to create models that are good enough for the vast majority of deployed use cases? Bigger might always be better; however, it’s also possible that the models that smaller entities can develop will suit the needs of consumers (whether individuals or companies) well enough, and be more affordable. Segments of the market and different groups of users may operate differently. There may be some sets of use cases in which competition is strongly a function of relative model quality, while in other instances competition depends on reaching some threshold level of model quality, and then differentiation occurs through other non-AI factors (like marketing and sales). Users might in many cases need outputs that reach a given quality threshold, without necessarily being best in class; or, a model might serve a subset of users at very high quality levels and thus be sufficient even if it doesn’t hit performance benchmarks that matter to others."