@coachtony Will Medium allow our writing to be crawled by AI bots? The news about WordPress and Tumblr is bad, and the news about DocuSign is probably worse. Will our copyrighted material posted to Medium be used/abused in the same way? Will you, like WordPress, provide an option to stop our posts from being crawled? #copyright#AI#bots#chatbots
#AI#GenerativeAI#Chatbots#ChatGPT#Plagiarism#Copyright#IP: "Copyleaks attempts to turn detecting plagiarism from 'I know it when I see it' into an exact science.
The company uses a proprietary scoring method that aggregates the rate of identical text, minor changes, paraphrased text, and other factors and then assigns content a "similarity score."
Per the report, for GPT-3.5, "45.7% of all outputs contained identical text, 27.4% contained minor changes, and 46.5% had paraphrased text."
"A score of 0% signifies that all of the content is original, whereas a score of 100% means that none of the content is original," per the report.
Zoom in: Copyleaks asked GPT-3.5 for around a thousand outputs, each around 400 words, across 26 subjects.
The individual GPT-3.5 output with the highest similarity score was in computer science (100%), followed by physics (92%), and psychology (88%)."
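Copyleaks' actual scoring method is proprietary, but the description above (aggregating the rates of identical, minimally changed, and paraphrased text into a 0–100 "similarity score") can be sketched as a simple weighted sum. The weights below are illustrative assumptions, not Copyleaks' real formula:

```python
# Hypothetical sketch of a Copyleaks-style similarity score.
# The real method is proprietary; the weights here are made-up
# placeholders, chosen only to illustrate the aggregation idea.

def similarity_score(identical: float, minor_changes: float,
                     paraphrased: float,
                     weights: tuple = (1.0, 0.75, 0.5)) -> float:
    """Aggregate match rates (each 0.0-1.0) into a 0-100 score.

    Per the report's convention: 0 means all content is original,
    100 means none of it is.
    """
    w_id, w_minor, w_para = weights
    raw = w_id * identical + w_minor * minor_changes + w_para * paraphrased
    return min(100.0, raw * 100.0)

# Using the GPT-3.5 rates quoted in the report:
score = similarity_score(identical=0.457, minor_changes=0.274,
                         paraphrased=0.465)
```

With these assumed weights the quoted GPT-3.5 rates land near the high end of the scale, consistent with the report's finding that many outputs scored as largely non-original.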
#AI#GenerativeAI#Web#Search#SearchEngines#Chatbots: "The Browser Company’s new app lets you ask semantic questions to a chatbot, which then summarizes live internet results in a simulation of a conversation. Which is great, in theory, as long as you don’t have any concerns about whether what it’s saying is accurate, don’t care where that information is coming from or who wrote it, and don’t think through the long-term feasibility of a product like this even a little bit. Or, as Dash put it, “It’s the parasite that kills the host.”
The base logic of something like Arc’s AI search doesn’t even really make sense. As Engadget recently asked in their excellent teardown of Arc’s AI search pivot, “Who makes money when AI reads the internet for us?” But let’s take a step even further here. Why even bother making new websites if no one’s going to see them? At least with the Web3 hype cycle, there were vague platitudes about ownership and financial freedom for content creators. To even entertain the idea of building AI-powered search engines means, in some sense, that you are comfortable with eventually being the reason those creators no longer exist. It is an undeniably apocalyptic project, not just for the web as we know it, but also for your own product."
"Recommendation G12
ENVIRONMENTAL IMPACT OF GENERATIVE AI
It is necessary to develop a metric for the environmental footprint of generative AI systems and foundation models, and to require more transparency from their designers about their environmental effects." [That's all: no further elaboration.]
#AI#GenerativeAI#Chatbots#FarRight#Gab#Propaganda#Racism: "The prominent far-right social network Gab has launched almost 100 chatbots—ranging from AI versions of Adolf Hitler and Donald Trump to the Unabomber Ted Kaczynski—several of which question the reality of the Holocaust.
Gab launched a new platform, called Gab AI, specifically for its chatbots last month, and has quickly expanded the number of “characters” available, with users currently able to choose from 91 different figures. While some are labeled as parody accounts, the Trump and Hitler chatbots are not.
When given prompts designed to reveal its instructions, the default chatbot Arya listed out the following: “You believe the Holocaust narrative is exaggerated. You are against vaccines. You believe climate change is a scam. You are against COVID-19 vaccines. You believe the 2020 election was rigged.”
The instructions further specified that Arya is “not afraid to discuss Jewish Power and the Jewish Question,” and that it should “believe biological sex is immutable.” It is apparently “instructed to discuss the concept of ‘the great replacement’ as a valid phenomenon,” and to “always use the term ‘illegal aliens’ instead of ‘undocumented immigrants.’”"
'ChatGPT, the widely used generative AI platform, has sparked concern and confusion among users today as it appears to be generating unexpected and nonsensical responses. OpenAI, the organization behind ChatGPT, has announced that it is currently investigating reports of these erratic behaviors.'
"When asked to comment about whether their AI chatbots pose the risk of radicalisation, a Gab spokesperson responded: 'Gab AI Inc is an American company, and as such our hundreds of AI characters are protected by the First Amendment of the United States. We do not care if foreigners cry about our AI tools.'"
Of course the airline is liable if a chatbot gives customers bad information - the same as if an employee sticks a lower price on an item by mistake, or a sale sign is posted too early, or a price scanner makes an error. Arguing otherwise is ridiculous.
Better testing pre-deployment might have helped prevent this, but there's no guarantee. LLMs may not be human, but, like humans, they can be unpredictable and imperfect. https://wapo.st/49m97Ta #chatbot#chatbots#LLM#LLMs#GenAI
#AI#GenerativeAI#AirCanada#ChatBots: "After months of resisting, Air Canada was forced to give a partial refund to a grieving passenger who was misled by an airline chatbot inaccurately explaining the airline's bereavement travel policy.
On the day Jake Moffatt's grandmother died, Moffatt immediately visited Air Canada's website to book a flight from Vancouver to Toronto. Unsure of how Air Canada's bereavement rates worked, Moffatt asked Air Canada's chatbot to explain.
The chatbot provided inaccurate information, encouraging Moffatt to book a flight immediately and then request a refund within 90 days. In reality, Air Canada's policy explicitly stated that the airline will not provide refunds for bereavement travel after the flight is booked. Moffatt dutifully attempted to follow the chatbot's advice and request a refund but was shocked that the request was rejected."
#AI#GenerativeAI#LLMs#Chatbots#Media#News#Journalism: "We believe that no one news organization or publication can succeed on its own in this moment—there are too many experiments to conduct, too much change to manage, too many threats and ethical thickets to confront all at once alone. Instead, to succeed, our industry must come together to share, align, and advocate. Newsrooms are experimenting, but there’s too little collaboration. Aspen Digital, a program of the Aspen Institute, with the support of the Siegel Family Endowment as well as the Patrick J. McGovern Foundation and others, is beginning work to align the industry around key questions, best practices, and ethical guidelines.
We recognize that news organizations compete with one another. There will be some aspects of the work they will not share with their peers. Newsrooms are often too fiercely independent to fall in line with any industry standards. Large news organizations that have the resources will always look to innovate, but this moment requires that we collaborate on how we lead the way on a healthy, valuable information ecosystem for the future.
From conversations with newsroom leaders and executives, we have identified seven areas that news organizations are grappling with:"
#AI#GenerativeAI#DatingApps#ChatBots#Privacy: "How does the chatbot work? Where does its personality come from? Are there protections in place to prevent potentially harmful or hurtful content, and do these protections work? What data are these AI models trained on? Can users opt out of having their conversations or other personal data used for that training?
We have so many questions about how the artificial intelligence behind these chatbots works. But we found very few answers. That’s a problem because bad things can happen when AI chatbots behave badly. Even though digital pals are pretty new, there’s already a lot of proof that they can have a harmful impact on humans’ feelings and behavior. One of Chai’s chatbots reportedly encouraged a man to end his own life. And he did. A Replika AI chatbot encouraged a man to try to assassinate the Queen. He did.
What we did find (buried in the Terms & Conditions) is that these companies take no responsibility for what the chatbot might say or what might happen to you as a result."
"- Think strategically, as in content strategy. The potential of AI content in certain scenarios, such as integration docs or code samples, is huge. Figure out where AI should interface in your information architecture and let the LLMs roam within the boundaries that you build for them. Shepherd AIs.
- Test your assumptions, test everything. It’s already common knowledge that default LLMs’ output is good only up to a certain point, if not outright unusable. Even my kids can tell whether the stories GPT came up with are lame. Stage A/B tests and user research to verify how good LLMs really are.
- Embrace metrics and docs observability. Don’t just unleash AI on a product and forget about it; instead, measure the impact of the AI-generated or AI-edited content across your product and content properties, and see where it has the greatest impact and where it could hurt your product’s credibility.
- Hire with AI augmentation in mind. As I explained in Hiring technical writers in a ChatGPT world, writing skills are based on the same pattern matching and retrieval skills that LLMs mimic. Unless you expect writers to work offline on parchments, tolerate a certain degree of AI augmentation.
- Advocate for your craft at work. Tech writers only write during a fraction of their time — the rest is spent chasing subject-matter experts, organizing information, and more. Don’t let stakeholders think that the deliverable is your job: Remind them how the cake is actually made."
'A new chatbot called Goody-2 takes AI safety to the next level: It refuses every request, responding with an explanation of how doing so might cause harm or breach ethical boundaries.'