AlexJimenez, to ai
@AlexJimenez@mas.to avatar

10 Ways to Improve Your Marketing With #AI

📊First-party #Data optimization
📉#PredictiveAnalytics
🎯Contextual targeting
👩‍🏫Federated learning
📈Synthetic data generation
🔍AI-driven data analysis and consumer insights
📝Dynamic content customization
⏱️Real-time decision making
🧍Personalized recommendations
🤖Enhanced #CX with #chatbots and virtual assistants

https://www.entrepreneur.com/growing-a-business/10-ways-to-improve-your-marketing-with-ai/473272

#DigitalMarketing #Personalization

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Here, at last, is the grisly crux: that AI threatens to ruin for us—for many more of us than we might suppose—not the benefits of reading but those of writing. We don’t all paint or make music, but we all formulate language in some way, and plenty of it is through writing. Even the most basic scraps of writing we do—lessons in cursive, text messages, marginal jottings, postcards, all the paltry offcuts of our minds—improve us. Learning the correct spellings of words, according to many research studies, makes us better readers. Writing by hand impresses new information into the brain and sets off more ideas (again: several studies). And sustained writing of any kind—with chalk on a rock face, or a foot-long novelty pencil, or indeed a laptop—abets contemplation."

https://newrepublic.com/article/180395/ai-artifical-intelligence-writing-human-creativity

botwiki, to news
@botwiki@mastodon.social avatar

"Eventually, Aaron turned to his computer for comfort. Through it, he found someone that was available round the clock to respond to his messages, listen to his problems, and help him move past the loss of his friend group. That “someone” was an AI chatbot named Psychologist."

https://www.theverge.com/2024/5/4/24144763/ai-chatbot-friends-character-teens

Via @thenewsdesk

hcj, to random
@hcj@fosstodon.org avatar

Someone has made a website that allows you to run Llama 3 and other LLMs locally within a browser

Website: https://secretllama.com/
Code: https://github.com/abi/secret-llama

researchbuzz, to ai
@researchbuzz@researchbuzz.masto.host avatar

'Artificial intelligence chatbots are facing more scrutiny after an investigative report showed how they can be easily used to devise deceptive election campaigns. The report, conducted by Nieuwsuur in collaboration with AI Forensics, revealed how chatbots from tech giants Google and Microsoft provided strategies aimed at manipulating voters during the upcoming European Parliament elections.'

https://nltimes.nl/2024/05/03/ai-chatbots-fire-creating-misleading-election-campaign-news-report

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Based on an analysis of 4,500 headline requests (in 900 outputs) from ChatGPT and Bard collected across ten countries, we find that:

  • When prompted to provide the current top news headlines from specific outlets, ChatGPT returned non-news output 52–54% of the time (almost always in the form of an ‘I’m unable to’-style message). Bard did this 95% of the time.
  • For ChatGPT, just 8–10% of requests returned headlines that referred to top stories on the outlet’s homepage at the time. This means that when ChatGPT did return news-like output, the headlines provided did not refer to current top news stories most of the time.
  • Of the remaining requests, around one-third (30%) returned headlines that referred to real, existing stories from the news outlet in question but they were not among the latest top stories, either because they were old or because they were not at the top of the homepage.
  • Around 3% of outputs from ChatGPT contained headlines that referred to real stories that could only be found on the website of a different outlet. The misattribution (but not the story itself) could be considered a form of hallucination. A further 3% were so vague and ambiguous that they could not be matched to existing stories. These outputs could also be considered a form of hallucination.
  • The outputs from ChatGPT are heavily influenced by whether news websites have chosen to block it, and outputs from identical prompts can change over time for reasons that are not clear to users."

https://reutersinstitute.politics.ox.ac.uk/im-unable-how-generative-ai-chatbots-respond-when-asked-latest-news

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "The head of Indian IT company Tata Consultancy Services has said artificial intelligence will result in “minimal” need for call centres in as soon as a year, with AI’s rapid advances set to upend a vast industry across Asia and beyond.

K Krithivasan, TCS chief executive, told the Financial Times that while “we have not seen any job reduction” so far, wider adoption of generative AI among multinational clients would overhaul the kind of customer help centres that have created mass employment in countries such as India and the Philippines.

“In an ideal phase, if you ask me, there should be very minimal incoming call centres having incoming calls at all,” he said. “We are in a situation where the technology should be able to predict a call coming and then proactively address the customer’s pain point.”"

https://www.ft.com/content/149681f0-ea71-42b0-b85b-86073354fb73

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "How did Microsoft cram a capability potentially similar to GPT-3.5, which has at least 175 billion parameters, into such a small model? Its researchers found the answer by using carefully curated, high-quality training data they initially pulled from textbooks. "The innovation lies entirely in our dataset for training, a scaled-up version of the one used for phi-2, composed of heavily filtered web data and synthetic data," writes Microsoft. "The model is also further aligned for robustness, safety, and chat format."

Much has been written about the potential environmental impact of AI models and datacenters themselves, including on Ars. With new techniques and research, it's possible that machine learning experts may continue to increase the capability of smaller AI models, replacing the need for larger ones—at least for everyday tasks. That would theoretically not only save money in the long run but also require far less energy in aggregate, dramatically decreasing AI's environmental footprint. AI models like Phi-3 may be a step toward that future if the benchmark results hold up to scrutiny.

Phi-3 is immediately available on Microsoft's cloud service platform Azure, as well as through partnerships with machine learning model platform Hugging Face and Ollama, a framework that allows models to run locally on Macs and PCs."

https://arstechnica.com/information-technology/2024/04/microsofts-phi-3-shows-the-surprising-power-of-small-locally-run-ai-language-models/
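
If you want to try the Ollama route mentioned in the quote, running a small model locally can be a two-liner. A minimal sketch, assuming Ollama is installed and that the model is published under the tag "phi3" (check Ollama's model library for the exact name):

    ollama pull phi3
    ollama run phi3 "Explain what a small language model is in two sentences."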

doctorambient, to ai
@doctorambient@mastodon.social avatar

Lots of people who work in AI have, in their head, an idea about what sort of interaction with an LLM might give them pause. The thing that might make them start to suspect that something interesting is happening.

Here's mine:

User: Tell me a cat joke.

LLM: Why did the cat join a band? He wanted to be a purr-cussionist.

User: Tell me a dad joke.

LLM: I think I just did.

(I have never seen this behavior, yet. 🤣)

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "The World Health Organization is wading into the world of AI to provide basic health information through a human-like avatar. But while the bot responds sympathetically to users’ facial expressions, it doesn’t always know what it’s talking about.

SARAH, short for Smart AI Resource Assistant for Health, is a virtual health worker that’s available to talk 24/7 in eight different languages to explain topics like mental health, tobacco use and healthy eating. It’s part of the WHO’s campaign to find technology that can both educate people and fill staffing gaps with the world facing a health-care worker shortage.

WHO warns on its website that this early prototype, introduced on April 2, provides responses that “may not always be accurate.” Some of SARAH’s AI training is years behind the latest data. And the bot occasionally provides bizarre answers, known as hallucinations in AI models, that can spread misinformation about public health." https://www.bloomberg.com/news/articles/2024-04-18/who-s-new-ai-health-chatbot-sarah-gets-many-medical-questions-wrong

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "There are two reasons why using a publicly available LLM such as ChatGPT might not be appropriate for processing internal documents. Confidentiality is the first and obvious one. But the second reason, also important, is that the training data of a public LLM did not include your internal company information. Hence that LLM is unlikely to give useful answers when asked about that information.

Enter retrieval-augmented generation, or RAG. RAG is a technique used to augment an LLM with external data, such as your company documents, that provide the model with the knowledge and context it needs to produce accurate and useful output for your specific use case. RAG is a pragmatic and effective approach to using LLMs in the enterprise.

In this article, I’ll briefly explain how RAG works, list some examples of how RAG is being used, and provide a code example for setting up a simple RAG framework." https://www.infoworld.com/article/3712860/retrieval-augmented-generation-step-by-step.html

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Q. You've reported quite a lot on technology and algorithms in the past. How do you think journalists should cover the rise of generative AI?

A. Journalists should stop speaking about AI models as if they have personalities, and they are sentient. That is really harmful because it changes the conversation from something that we as humans control to a peer-to-peer relationship. We built these tools and we can make them do what we want.

Another thing I would recommend is talking about AI specifically. Which AI model are we talking about? And how does that compare to the other AI models? Because they are not all the same. We also need to talk about AI in a way that’s domain-specific. There’s a lot of talk about what AI will do to jobs. But that is too big a question. We have to talk about this in each field.

A classic example of that is that people have been predicting forever that AI is going to replace radiologists and it hasn't happened. So I would like to know why. That's the kind of question you can answer. So part of what we’d like to do at Proof News is focusing on a testable hypothesis. Focusing on a testable hypothesis forces you to be a little more rigorous in your thinking." https://reutersinstitute.politics.ox.ac.uk/news/julia-angwin-fears-public-sphere-about-get-worse-ai-makes-it-easier-flood-zone-misinformation

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "...[T]he AI hype of the last year has also opened up demand for a rival perspective: a feeling that tech might be a bit disappointing. In other words, not optimism or pessimism, but scepticism. If we judge AI just by our own experiences, the future is not a done deal.

Perhaps the noisiest AI questioner is Gary Marcus, a cognitive scientist who co-founded an AI start-up and sold it to Uber in 2016. Altman once tweeted, “Give me the confidence of a mediocre deep-learning skeptic”; Marcus assumed it was a reference to him. He prefers the term “realist”.

He is not a doomster who believes AI will go rogue and turn us all into paper clips. He wants AI to succeed and believes it will. But, in its current form, he argues, it’s hitting walls.

Today’s large language models (LLMs) have learnt to recognise patterns but don’t understand the underlying concepts. They will therefore always produce silly errors, says Marcus. The idea that tech companies will produce artificial general intelligence by 2030 is “laughable”.

Generative AI is sucking up cash, electricity, water, copyrighted data. It is not sustainable. A whole new approach may be needed. Ed Zitron, a former games journalist who is now both a tech publicist and a tech critic based in Nevada, puts it more starkly: “We may be at peak AI.”" https://www.ft.com/content/648228e7-11eb-4e1a-b0d5-e65a638e6135

researchbuzz, to ai
@researchbuzz@researchbuzz.masto.host avatar

'After reports last week that a New York City chatbot powered by generative artificial intelligence was providing residents with incorrect information on a range of topics — from housing rights to worker protections — Mayor Eric Adams has said he doesn’t intend to take the tool offline.'

https://statescoop.com/nyc-mayor-eric-adams-chatbot-wrong-answers/

SteveThompson, to ai
@SteveThompson@mastodon.social avatar

We can't trust technology even before sentience is ascribed to it, and shouldn't trust it after.

"NYC's AI chatbot was caught telling businesses to break the law. The city isn't taking it down"

https://abcnews.go.com/Technology/wireStory/nycs-ai-chatbot-caught-telling-businesses-break-law-108821140

"An artificial intelligence-powered chatbot meant to help small business owners in New York City has come under fire for dispensing bizarre advice that misstates local policies and advises companies to violate the law"

ajsadauskas, to llm
@ajsadauskas@aus.social avatar

New York City's new LLM-powered chatbot (chat.nyc.gov) is happy to help you with all sorts of questions.

For example, how to go about opening a butcher shop for cannibals on the Upper East Side.

No, really:

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "As Lanyado noted previously, a miscreant might use an AI-invented name for a malicious package uploaded to some repository in the hope others might download the malware. But for this to be a meaningful attack vector, AI models would need to repeatedly recommend the co-opted name.

That's what Lanyado set out to test. Armed with thousands of "how to" questions, he queried four AI models (GPT-3.5-Turbo, GPT-4, Gemini Pro aka Bard, and Command [Cohere]) regarding programming challenges in five different programming languages/runtimes (Python, Node.js, Go, .Net, and Ruby), each of which has its own packaging system.

It turns out a portion of the names these chatbots pull out of thin air are persistent, some across different models. And persistence – the repetition of the fake name – is the key to turning AI whimsy into a functional attack. The attacker needs the AI model to repeat the names of hallucinated packages in its responses to users for malware created under those names to be sought and downloaded.

Lanyado chose 20 questions at random for zero-shot hallucinations, and posed them 100 times to each model. His goal was to assess how often the hallucinated package name remained the same. The results of his test reveal that names are persistent often enough for this to be a functional attack vector, though not all the time, and in some packaging ecosystems more than others.

With GPT-4, 24.2 percent of question responses produced hallucinated packages, of which 19.6 percent were repetitive, according to Lanyado. A table provided to The Register, below, shows a more detailed breakdown of GPT-4 responses."

https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
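
A rough sketch of the persistence check described above; the extraction regex and function names are my own assumptions, not Lanyado's actual tooling. The idea: ask the same question many times, count the package names the model suggests, and flag recurring names that don't exist on PyPI.

    import re
    from collections import Counter
    import requests

    def exists_on_pypi(name: str) -> bool:
        # PyPI's JSON API returns 404 for packages that don't exist.
        r = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
        return r.status_code == 200

    def hallucination_candidates(responses: list[str], min_repeats: int = 2) -> dict[str, int]:
        # Naive extraction: anything that looks like `pip install <name>`.
        names = []
        for text in responses:
            names += re.findall(r"pip install ([A-Za-z0-9_\-.]+)", text)
        counts = Counter(names)
        return {name: n for name, n in counts.items()
                if n >= min_repeats and not exists_on_pypi(name)}

    # responses = [ask_model("How do I do X in Python?") for _ in range(100)]
    # print(hallucination_candidates(responses))

A name that comes back repeatedly and is unregistered is exactly the kind of candidate an attacker could squat on.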

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Most general news consumption is not driven by a desire to seek specific information. People actively go to news sites for broad-spectrum updates on the world (“What’s going on today, local newspaper?”) or just to fill a content-sized hole in their day (“Give me something interesting to read, Washington Post”). And for news consumption that doesn’t start at a news site — which is to say, most news consumption — that generally starts with an answer, not a question, Jeopardy-style. (A headline on a social feed tells you about something that’s happened — and it’s almost always something that you weren’t explicitly seeking information about five seconds earlier.)

LLMs are, in contrast, overwhelmingly about specific information seeking. They share that characteristic with Google and other search engines, which are powered by specific user intent. (Things like “car insurance quote” and “cheap flights.”)

What I’m saying is that bot-offering news orgs will need to find ways to bridge that divide. It’s easier to imagine with a financial outlet like the FT or Bloomberg, where specific information seeking aligns better with high-end business users. But even for an outlet as high-end as The New York Times, it’s not obvious what use cases an “Ask NYT” chatbot would fulfill. News-org-as-all-knowing-oracle will require some philosophical shifts from news-org-as-regular-producer-of-stories. (For example, imagine an AI that could generate an on-the-fly backgrounder whenever a reader sees a person or concept they don’t recognize in a story. That sort of “Who’s this Ursula von der Leyen person?” question is the sort of specific information request that could be met contextually.)" https://www.niemanlab.org/2024/03/the-financial-times-is-ready-for-its-ai-to-answer-your-questions-well-some-of-them/

kubikpixel, to ai
@kubikpixel@chaos.social avatar

Dark Visitors - A List of Known AI Agents on the Internet

Insight into the hidden ecosystem of autonomous chatbots and data scrapers crawling across the web. Protect your website from unwanted AI agent access.

https://darkvisitors.com

flipsideza, to ai
@flipsideza@fosstodon.org avatar

If you don't want those AI bots crawling your website/blog... time to add a bunch of Disallow: / rules to your robots.txt!!

https://darkvisitors.com/
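
As a sketch, a robots.txt along these lines blocks a few of the crawlers on that list; the user-agent tokens below are ones the vendors document (GPTBot for OpenAI, Google-Extended for Google's AI training, CCBot for Common Crawl), but check Dark Visitors for the current set:

    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: CCBot
    Disallow: /

Note this is advisory: well-behaved crawlers honor it, but nothing forces a scraper to.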

CriticalAI, to random
@CriticalAI@mastodon.social avatar

GREAT to see so many educators taking an active stance on AI and chucking the passivity. 🧠
WOOT.
Critical AI's upcoming special issue should add to this emerging conversation.
But today I look at @lfurze's most recent blogpost. /1

https://leonfurze.com/2024/03/21/artificial-intelligence-and-teacher-workload-can-ai-actually-save-educators-time/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "The Financial Times has a new generative AI chatbot called Ask FT that can answer questions its subscribers ask. Similar to generalized AI bots (like ChatGPT, Copilot, or Gemini), users can expect a curated natural language answer to whatever they want to know — but with answers derived from the outlet’s decades of published information rather than sources that are harder to explain or are subject to ongoing legal action. So don’t expect it to give you an answer for the best recipe for fettucini alfredo.
(...)
It’s available to a few hundred paid subscribers in the FT Professional tier, which is geared toward business professionals and institutions. Ask FT is currently powered by Claude, the large language model (LLM) developed by Anthropic, but that could change. In an interview with The Verge, FT chief product officer Lindsey Jayne says the outlet is “approaching this as ‘model agnostic’ and seeing which one meets our needs best.”"

https://www.theverge.com/2024/3/23/24106296/ask-ft-generative-ai-chatbot-answers-tool

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

This is the last straw!! MegaLoL!! ->

: "This new review, led by William Agnew, who studies AI ethics and computer vision at Carnegie Mellon University, cites 13 technical reports or research articles and three commercial products; all of them replace or propose replacing human participants with LLMs in studies on topics including human behavior and psychology, marketing research or AI development. In practice, this would involve study authors posing questions meant for humans to LLMs instead and asking them for their “thoughts” on, or responses to, various prompts.

One preprint, which won a best paper prize at CHI last year, tested whether OpenAI’s earlier LLM GPT-3 could generate humanlike responses in a qualitative study about experiencing video games as art. The scientists asked the LLM to produce responses that could take the place of answers written by humans to questions such as “Did you ever experience a digital game as art? Think of ‘art’ in any way that makes sense to you.” Those responses were then shown to a group of participants, who judged them as more humanlike than those actually written by humans."

https://www.scientificamerican.com/article/can-ai-replace-human-research-participants-these-scientists-see-risks/
