This month's editorial looks at the possible ethics issues that could come with introducing a reliable detector to the filtering process of a submissions system. Curious what people think. https://clarkesworldmagazine.com/clarke_05_24/
I don't see an ethical issue because
1: AI content is not copyrightable
B: turnabout is fair play, and
Finally: boo hoo
However, I've read several reports showing that detectors for AI content have very high false positive and false negative rates, and are particularly likely to flag the writing of real humans who have autism as AI-generated, sooo "reliable" is the hardest part of your statement. I'll still be interested in reading the editorial sometime though.
Like I've been saying for many, many years now: if a company can't operate while obeying the law, then it shouldn't operate at all. No corporation has a right to exist. If Facebook can't run their business while being forced to moderate their platform, then they should shut down; if OpenAI can't operate without violating copyright, then they should shut down.
One of the best things I've ever bought is one of those magnetic tray things. Not only can you chuck stuff in and it'll just stick, but you can pick the tray up and sweep it across the floor for just such an occasion. It's really saved me a bunch of times.
I thought the exact same thing when I first saw that... but I do appreciate it for anime: those intros go on for literally 2 to 3 minutes, and no matter how good they are, that's a lot to sit through for the seventh time.
Where does this idea come from that a "software developer" is just a "code monkey" but a "software engineer" has a wider focus? I'm seeing it a lot these days. It's a bullshit distinction, IME. I was a "software engineer" in the mid 90s. Haven't called myself that since.
A "software engineer" is someone who works at a company that calls their programmers "engineers".
A "software developer" is someone who works at a company that calls their programmers "developers".
Nvidia has trained an LLM on their codebase to save senior designers from having to answer juniors' questions about what some code does.
I don't know how many times I have to say this, but these are not search engines. They are not inference engines. Their answers are 100% "what would be a plausible-sounding string of words in response to that input?" That's it. That's all they do.
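The "plausible string of words" point can be illustrated with a toy bigram model. To be clear, this is an invented sketch (the tiny corpus, the function names, everything), not how any real LLM works; it just shows what "pick a statistically plausible next word" looks like with no lookup or reasoning involved:

```python
import random

# Tiny toy corpus; a bigram "model" only remembers which word followed
# which in this text. That's its entire notion of "plausible".
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, n=5, seed=0):
    """Emit n words, each chosen only by how often it followed the last one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the"))
```

The output is grammatical-ish word salad: locally plausible, globally meaningless, and at no point does the model "know" anything about cats or mats.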
I would say that, without any other qualifiers, yes, it refers to human hybrids. You could have dog cyborgs or horse cyborgs as well, but just saying e.g. "I saw a cyborg" means you're talking about a human one.
On a related note: there was a time, right up through the early 2010s, when I, an Irish national with an Irish name living in Ireland, couldn't use my actual/real/legal/whatever name on many Irish websites, because they used some off-the-shelf American validation library that thought "oh no, names can only consist of one uppercase ASCII letter followed by three or more lowercase ASCII letters". My last name starts with O', like a vast number of Irish people.
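For the curious, the kind of rule described above is easy to reconstruct. The regex below is my guess at the shape of such a library's check, not the actual code, and the "less wrong" pattern is still nowhere near correct for real-world names (Unicode letters, spaces, hyphens, single-letter surnames, and so on):

```python
import re

# Hypothetical reconstruction of the described rule: one uppercase ASCII
# letter followed by three or more lowercase ASCII letters.
naive_name = re.compile(r"^[A-Z][a-z]{3,}$")

print(bool(naive_name.match("Smith")))    # accepted
print(bool(naive_name.match("O'Brien")))  # rejected: apostrophe not allowed

# A slightly less wrong pattern that at least admits apostrophes and
# hyphens -- still broken for accented names, spaces, etc.
looser_name = re.compile(r"^[A-Z][A-Za-z'\-]+$")
print(bool(looser_name.match("O'Brien")))  # accepted
```

The real lesson, of course, is that names shouldn't be validated this way at all.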
Oh for Christ's sake, not another one of these companies that refuse to say what they actually sell and just drool marketing speak. Put aside the military aspect, and put aside the AI. Can anyone tell me one product they sell and what it does? "Defending democracy" is not a process; it's a goal. How do you actually do that? Their website sounds like "we'll do an AI to make your country gooder, pls gib munny".
Must-read! @pluralistic on why "Open" "AI" isn't.
"Openwashing is the trick that large "AI" companies use to evade regulation and neutralize critics, by casting themselves as forces of ethical capitalism, committed to the virtue of openness."