Another looong day! But luckily my dog hung out with me outside between calls and I was able to listen to some talks for my #AcademicRunPlaylist! (1/7)
Next was an interesting talk by @Prakharg on enhancing #LLM reliability at @allen_ai. This talk gets into using natural language as an explicit tool for designing dialogue behavior, and while I'm not sure this is the best interface for LLMs, I could see it being used for explanation in a variety of contexts https://www.youtube.com/watch?v=KBGYXxQFgmY (3/7)
They talked about “neural networks” and “machine learning,” mentioned “encoder-decoder” architectures and “Transformers” for language prediction, but did not join the AI circus. I respect that.
Tim Cook: “What people are now talking about are these Large Language Models. I think they have great promise. I do think that it’s so important to be very deliberate and very thoughtful in the development and the deployment of these because they can be so powerful that you worry about things like bias, things like misinformation, maybe worse in some cases.”
Is there anyone serious who is saying this? Or is this just another way to make the tech seem more powerful than it is?
I don't get this "we're all gonna die" thing at all.
I do get the "we are too disorganized and greedy to integrate new technology well without the economy getting screwed up and people suffering... but that's another matter..."
It does occur to me that the smartest people on the planet are rarely the richest or the most powerful. Quite the reverse in fact.
I would imagine the same issues would face a general AI.
I think the best strategy for world domination is not to develop superhuman intelligence, but to be dumb and lie consistently and plausibly. The LLMs are well advanced in that.
Of course, the very best strategy for vast riches is to be both dumb and lucky.
a thing i don’t get is what is new. i mean, computers have long been much, much “smarter” than humans in, for example, their ability to perform arithmetic, or to remember things. recent AI tools are interesting for sure, but what superior competence of theirs makes these new systems so threatening, compared to older superior competences?
@interfluidity hype and human 'hallucination' is what makes this seem a much bigger step than it is.
It's the first time computers have seemed human in how we interact.
People are projecting human qualities onto them because they don't understand what LLMs actually are. So people make invalid assumptions, and VCs and corporations are exploiting this, deliberately anthropomorphizing #LLMs with terms like "hallucinate" instead of "make false statements".
LLM stands for Large Language Model, the type of autocomplete-style generative algorithm behind tools such as ChatGPT, often incorrectly referred to as "AI" or "artificial intelligence"
Anyway, back on topic: simple things, like naming characters and places, are fairly minor decisions and can easily be checked against existing IP by a quick Google search (no Theodore Pendlewick exists in fiction, but several companies named Arcane Agenda exist in reality). The side-effects of #LLMs, like emulating its training data to the point of copyright infringement, can be easily kept in check at that scale, and retracing how exactly it arrived at the output is largely irrelevant.
But when you generate entire audience-facing content that way, you can't check for infringement anymore, and when you let it make major decisions, you can't hold it accountable – neither for malpractice, nor for reinforcing biases contained in its training data.
I don't think anyone is saying every and all use of such models is bad (although there is an argument to be made for shutting the lid on Pandora's box because harmful use outweighs harmless use), but you have to be aware of its limits, and the way it works (again, it's basically like your phone's predictive keyboard, just trained on way more data, none of which is yours) before you can safely and responsibly employ it.
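The "predictive keyboard" comparison above can be made concrete with a toy bigram model — a deliberately simplified sketch, not what any real LLM does at scale, but the same basic move: pick a statistically likely next word, with no model of whether it's true. The tiny corpus here is invented for illustration.

```python
# Toy next-word prediction, the mechanism behind a phone's predictive
# keyboard. LLMs do the same kind of thing at vastly larger scale:
# choose a statistically likely continuation, not a factual one.
from collections import Counter, defaultdict

corpus = (
    "the model predicts the next word "
    "the model predicts the most likely word"
).split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "model" — the most frequent follower of "the"
```

Note the model will happily emit "the model predicts the model predicts…" forever; fluency and correctness are entirely separate properties.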
Oh and by the way, #feditips: if you want to tag a hashtag but need to append a plural s without altering the tag, as I did with #LLMs above: if your client supports inputting HTML entities (as glitch-soc does, in both Markdown and HTML modes), you can use a zero-width space entity (&#8203;) between the hashtag name and the s.
Lots of conversation around the political risks of #AI. Lots talk about disinfo. Some around data privacy. But I think the biggest political risk right now is these tools convincing us to do visibly stupid things. That professor who accused his class of cheating — that lawyer who filed fake citations — these are smart people! Political staff need to get hands-on familiarity with these tools to understand the benefits and limits, or else we too are primed for something consequentially embarrassing.
I really would like to avoid the situation where some staffer asks some #LLM to dig up an old quote or legislative record or financial history or other oppo-research and they run with a bunch of campaign comms on a thing that never happened.
...source community's democratization approach to #LLMs now outpacing even #OpenAI's development leaps. (See first link) This is the evolutionary booster predicted at the end of the last century as "humanity has evolved into a socio-technological complex."
I just got the weirdest CV cover letter ever for a #python developer role, mixing Adrian Newey (the Adrian Newey of F1 fame), Hitler (that Hitler) and epigenetics (wtf epigenetics has to do with software engineering??). Quickest rejection ever
We shouldn't be worrying about AI wiping out humanity. That's a smokescreen. That's sci-fi. We need to worry about the individuals now and in the near future who can be hurt by the premature deployment of generative AI systems that spew wrong answers and lies, and then when asked for confirmation, lie about their own lies! And just popping up warnings to users is useless, because you know and I know that hardly anyone will read those warnings or pay any attention to them whatsoever.
The warnings are also cold comfort to the loved ones of the guy the vigilante gang beat to death because some generated content somewhere labelled him an unrepentant pedophile in their neighbourhood.
I don't know what the solution is. Mandate that an indicator of its origin be prominently displayed within the generated text? How could that work?
@arstechnica '... he asked the AI tool whether Varghese is a real case. ChatGPT answered that it "is a real case" ... When asked if the other cases provided by ChatGPT are fake, it answered, "No, the other cases I provided are real and can be found in reputable legal databases such as LexisNexis and Westlaw." '
If accurate, to me this illustrates the core of the issue with many people's use of so-called #AI tools: this lawyer bought the hype that these #LLMs are verging on human-like intelligence and could understand and answer his question, while in reality they were only constructing a statistically probable sequence of words that might be written in response to such a query, with no relationship to the facts of the matter.
I'm downloading Descript as I write this. My first intro to text as the interface to audio editing (such a profound and wonderful idea!) was courtesy of Laurian Gridinoc years ago; I've been eagerly awaiting more general availability.
I enjoyed the podcast and your conversation on Descript for synchronized text+audio editing.
Reading the very clean transcript including a few ‘disfluencies’ — like ‘um’ — made me think of this ‘About Language’ podcast: http://becauselanguage.com/76-ooo-yum-uh/
TLDR ‘Um’s are helpful markers to signal pauses while thinking. Host Daniel Midgley decided to leave them in transcripts to keep a natural tone. Some #LLMs pick this up as well.
"It’s increasingly looking like this may be one of the most hilariously inappropriate applications of AI that we’ve seen yet." I am riveted by the extensive documentation of how ChatGPT-powered Bing is now completely unhinged. @simon has chronicled it beautifully here: https://simonwillison.net/2023/Feb/15/bing/
...operator."
This is a #PR fiasco the colonel caused.
Sometimes, the original story pans out.
Besides, now that an #LLM can be set up on a gaming #PC for as little as 100 USD, and gamers being what they are, I'm sure we will not have to wait another year for corroboration, alas.
"If you google around enough, you’ll find people talking about using LangChain to chain together LLM calls and get better outputs. However, chaining calls to an LLM just makes the latency problem with LLMs worse, which is a nonstarter for us. But even if it wasn’t, we have the potential to get bitten by compound probabilities."
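The compound-probabilities worry in the quote above is easy to quantify: a chain succeeds only if every link does, so reliability decays geometrically with chain length. The 90% per-call success rate here is an assumed number for illustration, not a measurement from the post.

```python
# Hypothetical illustration of compound probabilities in chained LLM
# calls: with an assumed ~90% per-call success rate, end-to-end
# reliability is 0.9 raised to the number of chained calls.
per_call_success = 0.9  # assumption, for illustration only

for n_calls in (1, 3, 5, 10):
    chain_success = per_call_success ** n_calls
    print(f"{n_calls:2d} chained calls -> {chain_success:.0%} end-to-end")
```

Even a quite good 90%-accurate step yields only about 59% end-to-end correctness after five chained calls, and roughly 35% after ten — which is the "bitten by compound probabilities" point in plain numbers.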
➥ @honeycombio
In my opinion, here is the real power of current-level #LLMs: declarative #programming via close-to-natural-language prompts.
It’s not just productivity: so many more people can get into programming now! No need to understand loops, branches, conditions! It’s all doable by natural language.
You only need to dabble a bit in states and data storage structures. Just wonderful!
It’s hard to believe the marketing momentum behind #AI when auto-correct is so fucking bad, and Instagram pushes bullshit at me if I pause for a second on a post. No, Instagram, my mistaken like on a post doesn’t mean that’s all I want to see for the next week. #LLM #AutoCorrect