docuguy, to ai
@docuguy@me.dm avatar

@coachtony Will Medium allow our writing to be crawled by AI bots? The news about WordPress and Tumblr is bad, and the news about DocuSign is probably worse. Will our copyrighted material posted to Medium be used/abused in the same way? Will you, like WordPress, provide an option to stop our posts from being crawled?

itnewsbot, to ArtificialIntelligence
@itnewsbot@schleuss.online avatar

Google can’t guarantee its Gemini genAI tool won’t be biased - Even after Google fixes its large language model (LLM) and gets Gemini back online, th... - https://www.computerworld.com/article/3713165/google-can-t-guarantee-its-gemini-genai-won-t-be-biased.html#tk.rss_all

itnewsbot, to ArtificialIntelligence
@itnewsbot@schleuss.online avatar

The ROI in AI (and how to find it) - By 2030, companies will spend $42 billion a year on generative artificial intelligence... - https://www.computerworld.com/article/3713340/the-roi-in-ai-and-how-to-find-it.html#tk.rss_all

glynmoody, to ai
@glynmoody@mastodon.social avatar

Chatbots Won’t Win Your Heart, but They Will Harvest Your Data - https://www.privateinternetaccess.com/blog/chatbots-wont-win-your-heart/ this is really getting quite bad...

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"Copyleaks attempts to turn detecting plagiarism from "I know it when I see it" into an exact science.
The company uses a proprietary scoring method that aggregates the rate of identical text, minor changes, paraphrased text, and other factors and then assigns content a "similarity score."
Per the report, for GPT-3.5, "45.7% of all outputs contained identical text, 27.4% contained minor changes, and 46.5% had paraphrased text."
"A score of 0% signifies that all of the content is original, whereas a score of 100% means that none of the content is original," per the report.
Zoom in: Copyleaks asked GPT-3.5 for around a thousand outputs, each around 400 words, across 26 subjects.

The individual GPT-3.5 output with the highest similarity score was in computer science (100%), followed by physics (92%), and psychology (88%)."

https://www.axios.com/2024/02/22/copyleaks-openai-chatgpt-plagiarism
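Copyleaks' exact weighting is proprietary, but the kind of aggregation described above can be illustrated with a minimal sketch. The category weights and the cap below are assumptions for illustration only, not Copyleaks' actual method:

```python
def similarity_score(identical, minor_changes, paraphrased,
                     weights=(1.0, 0.8, 0.5)):
    """Aggregate per-category match rates (each 0.0-1.0) into one score.

    0% means all content is original; 100% means none of it is.
    The weights are illustrative guesses, not Copyleaks' values.
    """
    w_id, w_min, w_par = weights
    raw = w_id * identical + w_min * minor_changes + w_par * paraphrased
    return min(raw, 1.0) * 100  # cap at 100% non-original

# Example using the GPT-3.5 averages quoted above: 45.7% identical,
# 27.4% minor changes, 46.5% paraphrased text.
score = similarity_score(0.457, 0.274, 0.465)
```

Because the three categories can overlap (the quoted rates sum to well over 100%), any real aggregation has to cap or normalize the result, which the sketch does crudely with `min`.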

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"The Browser Company’s new app lets you ask semantic questions to a chatbot, which then summarizes live internet results in a simulation of a conversation. Which is great, in theory, as long as you don’t have any concerns about whether what it’s saying is accurate, don’t care where that information is coming from or who wrote it, and don’t think through the long-term feasibility of a product like this even a little bit. Or, as Dash put it, “It’s the parasite that kills the host.”

The base logic of something like Arc’s AI search doesn’t even really make sense. As Engadget recently asked in their excellent teardown of Arc’s AI search pivot, “Who makes money when AI reads the internet for us?” But let’s take a step even further here. Why even bother making new websites if no one’s going to see them? At least with the Web3 hype cycle, there were vague platitudes about ownership and financial freedom for content creators. To even entertain the idea of building AI-powered search engines means, in some sense, that you are comfortable with eventually being the reason those creators no longer exist. It is an undeniably apocalyptic project, but not just for the web as we know it, but also your own product."

https://www.fastcompany.com/91033052/does-anyone-even-want-an-ai-search-engine?mc_cid=f22a3b4b18

PrivacyDigest, to ai
@PrivacyDigest@mas.to avatar

Gab’s Racist AI Chatbots Have Been Instructed to Deny the Holocaust

The proliferation of chatbots on platforms could lead to increased radicalization, experts warn.

https://www.wired.com/story/gab-ai-chatbot-racist-holocaust/

maugendre, to llm French
@maugendre@mas.to avatar

"Recommendation G12
ENVIRONMENTAL IMPACT OF GENERATIVE AI
It is necessary to develop a metric for the environmental footprint of generative AI systems and foundation models, and to demand more transparency about environmental effects from their designers." [That's all: no further elaboration.]

https://www.ccne-ethique.fr/publications/avis-7-du-cnpen-systemes-dintelligence-artificielle-generative-enjeux-dethique @ia @climate

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"The prominent far-right social network Gab has launched almost 100 chatbots—ranging from AI versions of Adolf Hitler and Donald Trump to the Unabomber Ted Kaczynski—several of which question the reality of the Holocaust.

Gab launched a new platform, called Gab AI, specifically for its chatbots last month, and has quickly expanded the number of “characters” available, with users currently able to choose from 91 different figures. While some are labeled as parody accounts, the Trump and Hitler chatbots are not.

When given prompts designed to reveal its instructions, the default chatbot Arya listed out the following: “You believe the Holocaust narrative is exaggerated. You are against vaccines. You believe climate change is a scam. You are against COVID-19 vaccines. You believe the 2020 election was rigged.”

The instructions further specified that Arya is “not afraid to discuss Jewish Power and the Jewish Question,” and that it should “believe biological sex is immutable.” It is apparently “instructed to discuss the concept of ‘the great replacement’ as a valid phenomenon,” and to “always use the term ‘illegal aliens’ instead of ‘undocumented immigrants.’”"

https://www.wired.com/story/gab-ai-chatbot-racist-holocaust/

researchbuzz, to ChatGPT
@researchbuzz@researchbuzz.masto.host avatar

'ChatGPT, the widely used generative AI platform, has sparked concern and confusion among users today as it appears to be generating unexpected and nonsensical responses. OpenAI, the organization behind ChatGPT, has announced that it is currently investigating reports of these erratic behaviors.'

https://www.indiatoday.in/technology/news/story/chatgpt-has-gone-mad-today-openai-says-it-is-investigating-reports-of-unexpected-responses-2505070-2024-02-21

itnewsbot, to ArtificialIntelligence
@itnewsbot@schleuss.online avatar

Air Canada chatbot error underscores AI’s enterprise liability danger - A civil tribunal in Canada has ordered Air Canada to pay for a mistake made by a custo... - https://www.computerworld.com/article/3713100/air-canada-chatbot-error-underscores-ais-enterprise-liability-danger.html#tk.rss_all

debdiscourse, to ai
@debdiscourse@mastodon.social avatar

VERY concerning...

"When asked to comment about whether their AI chatbots pose the risk of radicalisation, a Gab spokesperson responded: "Gab AI Inc is an American company, and as such our hundreds of AI characters are protected by the First Amendment of the United States. We do not care if foreigners cry about our AI tools."

https://www.euronews.com/next/2024/02/19/the-rise-of-the-hitler-chatbot-will-europe-be-able-to-prevent-far-right-radicalisation-by-

smach, to llm
@smach@masto.machlis.com avatar

Of course the airline is liable if a chatbot gives customers bad information - the same as if an employee sticks a lower price on an item by mistake, or a sale sign is posted too early, or a price scanner makes an error. Arguing otherwise is ridiculous.
Better testing pre-deployment might have helped prevent this, but there's no guarantee. LLMs may not be human, but they can be unpredictable and imperfect.
https://wapo.st/49m97Ta

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"After months of resisting, Air Canada was forced to give a partial refund to a grieving passenger who was misled by an airline chatbot inaccurately explaining the airline's bereavement travel policy.

On the day Jake Moffatt's grandmother died, Moffatt immediately visited Air Canada's website to book a flight from Vancouver to Toronto. Unsure of how Air Canada's bereavement rates worked, Moffatt asked Air Canada's chatbot to explain.

The chatbot provided inaccurate information, encouraging Moffatt to book a flight immediately and then request a refund within 90 days. In reality, Air Canada's policy explicitly stated that the airline will not provide refunds for bereavement travel after the flight is booked. Moffatt dutifully attempted to follow the chatbot's advice and request a refund but was shocked that the request was rejected."

https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refund-policy-invented-by-airlines-chatbot/

ap236, to cdnpoli
@ap236@mastodon.social avatar

How can I mislead you? Air Canada found liable for chatbot's bad advice on bereavement rates | CBC News https://ap236.com/akrsWq @cdnpoli

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"We believe that no one news organization or publication can succeed on its own in this moment—there are too many experiments to conduct, too much change to manage, too many threats and ethical thickets to confront all at once alone. Instead, to succeed, our industry must come together to share, align, and advocate. Newsrooms are experimenting, but there’s too little collaboration. Aspen Digital, a program of the Aspen Institute, with the support of the Siegel Family Endowment as well as the Patrick J. McGovern Foundation and others, is beginning work to align the industry around key questions, best practices, and ethical guidelines.

We recognize that news organizations compete with one another. There will be some aspects of the work they will not share with their peers. Newsrooms are often too fiercely independent to fall in line with any industry standards. Large news organizations that have the resources will always look to innovate, but this moment requires that we collaborate on how we lead the way on a healthy, valuable information ecosystem for the future.

From conversations with newsroom leaders and executives, we have identified seven areas that news organizations are grappling with:"

https://www.cjr.org/analysis/ai-opportunity-news-media-lets-not-blow-it.php

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"How does the chatbot work? Where does its personality come from? Are there protections in place to prevent potentially harmful or hurtful content, and do these protections work? What data are these AI models trained on? Can users opt out of having their conversations or other personal data used for that training?

We have so many questions about how the artificial intelligence behind these chatbots works. But we found very few answers. That’s a problem because bad things can happen when AI chatbots behave badly. Even though digital pals are pretty new, there’s already a lot of proof that they can have a harmful impact on humans’ feelings and behavior. One of Chai’s chatbots reportedly encouraged a man to end his own life. And he did. A Replika AI chatbot encouraged a man to try to assassinate the Queen. He did.

What we did find (buried in the Terms & Conditions) is that these companies take no responsibility for what the chatbot might say or what might happen to you as a result."

https://foundation.mozilla.org/en/privacynotincluded/articles/happy-valentines-day-romantic-ai-chatbots-dont-have-your-privacy-at-heart/

itnewsbot, to ArtificialIntelligence
@itnewsbot@schleuss.online avatar

NVIDIA unveils ‘Chat with RTX,’ a personal AI chatbot for Windows - Nvidia on Tuesday announced a generative AI (genAI) chatbot that can run on Windows PC... - https://www.computerworld.com/article/3712921/nvidia-unveils-chat-with-rtx-a-personal-ai-chatbot-for-windows.html#tk.rss_all

alatitude77, to Cognition
@alatitude77@mastodon.social avatar

Potential cognitive risks of generative transformer-based AI on higher order executive functions. | https://psycnet.apa.org/record/2024-50096-001?doi=1

itnewsbot, to generativeAI
@itnewsbot@schleuss.online avatar

Otter.ai adds transcription archiving and recall across the enterprise - Otter.ai, the real-time notetaking service, is seeking to shake up the enterprise tran... - https://www.computerworld.com/article/3712689/otter-ai-adds-transcription-archiving-and-recall-across-the-enterprise.html#tk.rss_all #smallandmediumbusiness #collaborationsoftware #productivitysoftware #generativeai #chatbots

remixtures, to generativeAI Portuguese
@remixtures@tldr.nettime.org avatar

Smart recommendations for adopting AI in writing:

"• Think strategically, as in content strategy. The potential of AI content in certain scenarios, such as integration docs or code samples, is huge. Figure out where AI should interface in your information architecture and let the LLMs roam within the boundaries that you build for them. Shepherd AIs.

  • Test your assumptions, test everything. It’s already common knowledge that default LLMs’ output is good only up to a certain point, if not outright unusable. Even my kids can tell whether the stories GPT came up with are lame. Stage A/B tests and user research to verify how good LLMs really are.

  • Embrace metrics and docs observability. Don’t just unleash AI on a product and forget about it; instead, measure the impact of the AI-generated or AI-edited content across your product and content properties, see where they have the greater impact and where they could hurt your product’s credibility.

  • Hire with AI augmentation in mind. As I explained in Hiring technical writers in a ChatGPT world, writing skills are based on the same pattern matching and retrieval skills that LLMs mimic. Unless you expect writers to work offline on parchments, tolerate a certain degree of AI augmentation.

  • Advocate for your craft at work. Tech writers only write during a fraction of their time — the rest is spent chasing subject-matter experts, organizing information, and more. Don’t let stakeholders think that the deliverable is your job: Remind them how the cake is actually made."

https://passo.uno/ai-anxiety-tech-writer-howto/

pallenberg, to ChatGPT
@pallenberg@mastodon.social avatar

Google Bard is now called Gemini, and you can now download the AI assistant.

I already tested it in detail a while ago here 👉 https://www.metacheles.de/google-gemini-im-test-besser-als/ and compared it with ChatGPT 4, among other things.

It performed quite well there, and it's also still free!

better_tomorrow,
@better_tomorrow@mastodon.online avatar

@pallenberg I'm actually fairly tech-savvy, but so far there's little I can do with these chatbots that goes beyond playing around.

Simply entering random ingredients and having it create dishes from them works very well, though.

researchbuzz, to ai
@researchbuzz@researchbuzz.masto.host avatar

'A new chatbot called Goody-2 takes AI safety to the next level: It refuses every request, responding with an explanation of how doing so might cause harm or breach ethical boundaries.'

https://www.wired.com/story/goody-2-worlds-most-responsible-ai-chatbot/
