@funnymonkey@freeradical.zone
@funnymonkey@freeradical.zone avatar

funnymonkey

@funnymonkey@freeradical.zone

Personal Acc't. Speaking only for myself. Privacy, Misinformation, AdTech, Education, Open Source, Content, and Standards. Education is a social justice issue.

#NoBot


funnymonkey, to random
@funnymonkey@freeradical.zone avatar

OpenAI states it will need to cease operating if people expect it to follow basic ethical guidelines.

https://www.theverge.com/2023/5/25/23737116/openai-ai-regulation-eu-ai-act-cease-operating

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

If a corporate entity can render parts of your home useless, you don't have a "smart" home - you have a remote, inaccessible landlord.

Connected devices aren't "smart". Stop calling them that. Stop using the marketing language of the tech companies who are abusing you.

https://www.independent.co.uk/tech/smart-home-lock-out-amazon-b2358107.html

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

Loving how teachers, in an effort to "catch AI plagiarism", share student IP with multiple sketchy, for-profit "AI detectors" that can retain and use student work forever.

Teachers are feeding student work into honeypots collecting a new corpus of training data for the next generation of shit ML.

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

Today, the staffers at the Washington Post are staging a one-day walkout to protest management's unwillingness to make progress in contract negotiations.

To support these professionals, do not visit WaPo today.

SwiftOnSecurity, to random

We really really need to evangelize active PII data purging as mandated risk-reduction.

https://infosec.exchange/@vcsjones/111717833173313022

funnymonkey,
@funnymonkey@freeradical.zone avatar

@SwiftOnSecurity Right now, like literally this very second, EdTech folks are making the case that they "need" to retain student data, including a range of sensitive information directly tied to PII or easily re-identifiable, in order to train AI models.

Of course, these models don't exist yet, and the goals of this work are vague to the point of laughable, but they still claim that they "need" to retain the data.

It's a mess.

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

March 2023: Microsoft fires AI ethics team responsible for, among other things, making sure AI products included safeguards for ethical use. https://arstechnica.com/tech-policy/2023/03/amid-bing-chat-controversy-microsoft-cut-an-ai-ethics-team-report-says/

January 2024: People use Microsoft tools to create porn fakes of Taylor Swift. https://www.404media.co/ai-generated-taylor-swift-porn-twitter/

This is AI in action. AI is Big Tech, Big Tech is AI.

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

So, leading uses of LLMs include non-consensual porn, improving cyberattacks, and political disinformation.

But sure - AI will transform education.

https://www.theverge.com/2024/2/14/24072706/microsoft-openai-cyberattack-tools-ai-chatgpt

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

The fact that, in the US, access to even mediocre health insurance is tied to our jobs forces people to make impossible choices: do I tolerate abusive behavior, unethical behavior, and potentially illegal behavior, or do I put the security and health of myself and my family at risk?

I have observed this more times than I can count in my career, including in my very recent past (but thankfully, not in my present!).

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

Can we ban X, Facebook, Instagram, Doubleclick, Snapchat, and all adtech and location services companies and data brokers now?

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

The other day, I saw an account from Medium's Mastodon instance. This account was clearly AI generated, and wasn't doing a whole lot to obscure that reality.

I noticed links to accounts on other platforms. I was curious.

A thread.

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

This site is a great, well-designed overview of how generative AI works. It's clear, accurate, and accessible.

https://knowingmachines.org/models-all-the-way

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

FFS. People are really talking about "AI literacy" in education.

Want to teach "AI literacy"? Teach history that includes an honest look at systemic racism. Teach intersectionality.

Teach statistics. Teach logic. Teach literature. Teach drawing.

There is no such thing as "AI literacy". There is the logic of appropriation, and theft, and predictions based on what has been taken.

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

Question for folks here: what are examples of "good" security or privacy advice that are shared frequently, but are difficult for people to implement?

Example: don't click on links in emails/mouse over links to see where they point.

This is technically useful and accurate, but very difficult or impractical to do on a phone - and in some cases, the text displayed for a link can differ from its actual destination!
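That gap between the text a reader sees and the target a browser actually follows can be sketched in a few lines. This is a hypothetical example (the domains are made up, and the parser class is my own illustration) using Python's standard-library html.parser to pull out both halves of a deceptive anchor:

```python
# A minimal sketch of why "read the link text" is unreliable advice:
# the visible text of a link can be a convincing URL while the actual
# target (the href attribute) points at a look-alike phishing domain.
# Domains here are invented for illustration.
from html.parser import HTMLParser

EMAIL_HTML = (
    '<a href="https://examp1e-bank.evil.test/login">'
    'https://www.example-bank.com</a>'
)

class LinkAuditor(HTMLParser):
    """Collects (visible text, actual target) pairs for every anchor tag."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        # Only accumulate text while we're inside an <a> element.
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text), self._href))
            self._href = None

auditor = LinkAuditor()
auditor.feed(EMAIL_HTML)
for text, href in auditor.links:
    print(f"shown:  {text}")
    print(f"actual: {href}")
```

Nothing in the rendered email signals the mismatch; only inspecting the underlying markup (or hovering, on a device that supports it) reveals where the link really goes.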

Any other examples like this? Please share!

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

The different names applied to phishing variants (e.g., smishing, quishing) do more harm than good because they introduce more inaccessible terminology into a space where people are already intimidated.

Just say "phishing", and then clarify that phishing attacks can arrive in multiple ways, including via text message or a QR code.

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

People using tech often have no idea how it works, how well it works, or when it will give garbage results.

ALWAYS assume that any tech will be used in ways that weren't intended or imagined.

And yes - the excuse of "shouldn't we try everything in the name of safety" really doesn't fly, ever. It's right up there with "it's for the kids".

https://www.wired.com/story/parabon-nanolabs-dna-face-models-police-facial-recognition/

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

All of these things are true:

ActivityPub is an open standard.

Anyone can, and should, use an open standard. That's the point!

Facebook has a well established track record of abusing data, and of building platforms that facilitate abuse, racism, and - in some cases - genocide.

A bad actor can abuse an open standard, and when we see the potential for that, proactive measures to avoid harm are a good step.

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

The 23andMe data shitshow (which is waiting to happen with every direct to consumer DNA service) highlights that we are only as secure as our least secure contact.

https://techcrunch.com/2024/01/03/23andme-tells-victims-its-their-fault-that-their-data-was-breached/

funnymonkey,
@funnymonkey@freeradical.zone avatar

If someone gave you a DNA kit for the holidays, give yourself the gift of privacy and throw it straight into the trash.

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

The fascist cult tendencies baked into Silicon Valley should really stop us all in our tracks.

https://newrepublic.com/article/180487/balaji-srinivasan-network-state-plutocrat

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

Hello, good people!

Looking for recommendations for an ethical domain registrar - one with a strong record of support for net neutrality, support for encryption, no racist/sexist skeletons in its closet, and no new owners jacking up prices on domain registrations and renewals.

Anybody have a registrar they love?

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

So, yeah. Facebook has been using that EXACT line and argument going back years, over multiple abuses.

https://www.funnymonkey.com/2024/05/advertisers-should-not-send-sensitive-information-about-people-through-our-business-tools-puhleeze/

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

Grilled apricots. They will be finished with ricotta cheese seasoned with lime zest, salt, and pepper.

Apricots on the heat.

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

I'm surprised it took two days. Really, once the template is in place, this could probably be a half-day job.

https://www.wsj.com/politics/how-i-built-an-ai-powered-self-running-propaganda-machine-for-105-e9888705?st=pm7346bk558pnfa

funnymonkey, to random
@funnymonkey@freeradical.zone avatar

Earlier today, I was drafting a longer piece of writing going into why EdTech continues to fail to live up to its promise, and how the AI hype is another chapter in a long and messy book.

(the tl;dr version: the people controlling funding, and the people getting funding, are largely the same cohort, and don't understand how the tech or education work)

But I stopped, for a couple reasons:

1/x

funnymonkey,
@funnymonkey@freeradical.zone avatar

And to be clear: when researchers and edtech folks talk about training AI models for education, they are talking about taking data from kids in the unfounded hope that if they collect enough data they (the researcher/edtech person) might be able to spot a pattern that Changes Everything (tm)

Please, don't buy this line. It's the veneer that people use to justify an ongoing data grab.

"For the children" - of course, because it always is, except the kids get nothing and the adults get rich.

5/x
