If LLMs continue to be the dominant branch of AI development, what effects will they have on spoken language?

The ubiquity of audio communication technologies, particularly telephone, radio, and TV, has had a significant effect on language. They further spread English around the world, making it more accessible and more necessary for lower social and economic classes; they led to the blending of dialects and the death of some smaller regional dialects; and they enabled the rapid adoption of new words and concepts.

How will LLMs affect language? Will they further cement English as the world’s dominant language or lead to the adoption of a new lingua franca? Will they be able to adapt to differences in dialects, or will they force us to further consolidate how we speak? What about programming languages? Will the model best able to generate usable code determine what language or languages will be used in the future? Thoughts and beliefs generally follow language, at least at the social scale. How will LLMs’ effects on language affect how we think and act? What we believe?

Paragone,

I figure they can either help or harm, depending on implementation:

Huggingface ( I always think of the “face-huggers” in Alien, when I see that name… and have NO idea why they thought that association would be a Good Thing™ ) has an LLM which apparently can do Sanskrit.

Consider, though:

All the Indigenous languages, where we’ve only actually got a partial-record of the language, and the “majority rule, minority extinguishes” “answer” of our normal process … obliterated all native speakers of that language ( partly through things like residential-schools, etc )…

now it becomes possible to have an LLM for that specific language, & to study the language, even though we’ve only got a piece of it.

This is like how we’ve sooo butchered the ecology that we can only study pieces of it now; there’s simply too much missing from what was there a few centuries ago, so we’re not looking at the original/proper thing, either in ecologies or in languages.

sigh

This wasn’t supposed to be depressing.


Consider how search-engines have altered how we have to communicate…

In order to FORCE a search-engine to consider a pair-of-words to be a single-term, you have to remove all intervening space/hyphens/symbols from between them.

ClimatePunctuation is a single search-token, but “Climate Punctuation” is two separate, unrelated terms, which may or may-not appear in the results.

It’s obscene.

I’m almost mad-enough to want legislation forcing search-engines to respect some kind of standard set of defaults ( add more terms == narrowing the search, ie defaulting to Boolean AND, as one example ),

so they’d stop enshittifying our lives while “pretending” that they’re helping.
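The default being asked for here - every added term narrows the result set, i.e. Boolean AND - can be sketched in a few lines. The toy document list and the `search` function are invented for illustration; a real engine obviously does far more:

```python
# Toy search with Boolean-AND defaults: every extra query term narrows the results.
# Documents and queries are plain strings; this is an illustration, not a real engine.

def search(documents, query):
    terms = query.lower().split()
    # A document matches only if it contains every term (Boolean AND).
    return [doc for doc in documents if all(t in doc.lower() for t in terms)]

docs = [
    "climate punctuation in deep time",
    "climate change and policy",
    "punctuation for beginners",
]

print(search(docs, "climate"))              # two documents match
print(search(docs, "climate punctuation"))  # adding a term narrows to one
```

With that default, "Climate Punctuation" would already behave as a narrowing pair of terms, with no need to glue the words together.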

( there was a Science news site which would not permit narrowing-of-search, and I hope they fscking died.

Making search unusable on a science site??

probably some “charity” who pays most of their annual-budget to their administration, & only exists for their entitlement.

I’m saying that after having encountered that religion in charities. )


Interesting:

search-engines alter our use-of-language,

social-sites do too,

LLMs do too,

marketing/propaganda does,

astroturfing does,

… it begins looking like real events are … rather-insignificant … influences in our languages?

Hm…

sbv,

The word arafed will enter the common lexicon.

Mr_Blott,

Is that a-rafed or ara-fed?

sbv,

Only the LLMs know.

late_night,

We’ll never ever start a phrase with “Certainly…” anymore

lvxferre,

[shameless ad] This sort of question fits well !linguistics [/shameless ad]

What causes the loss of a local variety (dialect or language) is not simply exposure to other varieties, but the loss of the identity associated with said variety. In other words, what led to the blending and death of those dialects wasn’t the audio communication technology - it’s economic, social, and ideological pressures, such as nationalism.

I’ll exemplify this using rhoticity in England. If telephone, radio and TV led to blending and death of dialects, you’d expect rhoticity in England to increase, due to exposure to American media. It didn’t - it’s decreasing:

https://mander.xyz/pictrs/image/b91339f3-c01d-44b0-b7b7-39fcea9e0155.png
Source for the map: it’s a collation of both maps in this article. The reason for the shift, however, becomes obvious when you look at identity matters: “you’re a Brit, speak like a Brit”.

The exact same reasoning applies to other languages, by the way. Caipira Portuguese features aren’t being replaced with the ones from that weird Globo TV accent, but with the ones spoken in São Paulo city; sheísmo in Argentina seems to be spreading, regardless of media from other countries; Occitan was not killed in France by simply exposing kids to French, but by making them feel ashamed of speaking Occitan.


With that out of the way, it’s hard to predict the future impact of machine text generation, be it through LLMs or better models. It’s perfectly possible that this sort of tech helps the preservation of local varieties, as LLMs are kind of good at translation; for example, I’ve noticed that Gemini is able to parse Venetian, even if unable to answer in the language.

Kethal,

Maybe they’ll help people sort out the difference between “affect” and “effect”.

Zarxrax,

On the contrary, AI will have been trained on so much bad grammar that it will become even more ingrained into society.

A_Very_Big_Fan, (edited )

Idk about that; I haven’t noticed any spelling mistakes so far (except that one post where they asked GPT-3.5 to list the steps in counting the A’s in “mayonnaise” and it counted like 4 of them)

The_Picard_Maneuver,

It’ll be interesting to see how it affects the average person’s written communication. When we know technology can handle something for us, our brains seem to let it carry the load. Think of all the people who aren’t great communicators or might not be confident in their English who would love to rely on this already.

I guess it’s a matter of perspective whether you view it as a crutch or a boon, which I’m sure has been a conversation about many pieces of technology over the years:

People were better at remembering phone numbers before cell phones stored them. People were better at remembering how to spell words before spell check/autocorrect. People were better at writing by hand before typewriters/keyboards. etc

trolololol,

Each generation thinks they had it the right way and younger ones have it easy. You can go back centuries with people pushing each other down.

What should be encouraged is the exchange of ideas and healthy debate. Words are just a tool for that, and spelling, grammar, and “not knowing Latin” are components of it.

A couple of generations down the road, we may be able to accurately transmit our thoughts to other people, calibrated for their culture and the biases they grew up with, and the generation immediately before will whine that LLMs were the right way to communicate.

Icalasari,

Eh, LLMs do have a significant problem: they can generate false information by themselves. Every tool before them required a person to make said false information, but LLMs can just generate it when asked a question

trolololol,

So what’s your point, should we trust machines less than the unhinged uncle at Thanksgiving?

ekky,

Jup, most definitely!

I’d much rather have just one unhinged uncle at St. Martin’s Day than have everybody come off as the unhinged uncle through lack of supervision of the LLMs talking in their place, making it seem like being unhinged is normal and thereby creating artificial peer pressure in a truly wicked exercise of laziness.

ekky,

People who rely too heavily on autocorrect already cause misunderstandings by writing something they did not intend to.

I had a friend at uni who was dyslexic, and while the words in his messages were written properly, you still had to guess the context from the randomly thrown-together words he presented you with.

Now that we can correct not only a single word or roughly the structure of a sentence, but instead fabricate whole paragraphs and articles by providing a single sentence, I imagine we will see a stark increase in low-quality content, accidental false information, and easily preventable misunderstandings - more than we already have.

Gradually_Adjusting,

I will share a journal entry from when I was mulling this over last December. Interested in your thoughts:

In old media, such as books and movies, we passively receive the media. We hear stories of heroes, songs about how the singer feels, written thoughts from inside another writer’s mind. These are valuable because of how we connect with others and thereby grow.

Interactive media, e.g. video games, allow us to tinker with a story and interrogate our relationship and attitude towards the ideas and themes thereby. We pull a lever, and the story changes direction. Video games have become such a large industry thanks to the more profound personal connection we can develop with the art through prescribed mechanical interactions. We press the buttons, and become the hero.

With the advent of artificial intelligence, it won’t be long before someone invents a new form of storytelling predicated on this technology. While we used to read stories, it now becomes possible for stories to be read into us. An AI can now be created that observes your life, and makes sense of it in a profound larger context.

This new media would be an AI companion who acts as a fourth wall of your life; layering your struggles and triumphs within a larger context, lightly editorializing, adding soundtracks that seamlessly portray your energy and emotional state (or humorously juxtapose it), adding humorous asides or callbacks that keep you in the moment, gently reminding and prompting next activities, reflecting on failures or calling attention to bad habits one is trying to break, and generally contriving to elevate the daily experience to the level of storytelling. It would give life an enhanced sense of meaningful examination, refining our sense of self and bringing our life into focus. This is a form of media that is not itself passively received, but actively treats your life as a fully interactive lived experience.

Art is integral to our ability to relate to others, experience things that are larger than ourselves, and to create meaning. This “fourth wall” AI would be a new form of media that seeks to amplify our understanding of ourselves, integrating our egos with our life as it exists as we change and grow throughout life.

The risks posed by malfeasant propagation of such a medium are at once beyond imagining and entirely predictable; the manufacturing of consent, the corrupting influence of profit motives, and the use of media as a social control mechanism are all pre-21st century concepts in media.

Whether a “fourth wall AI” represents a new threat or merely a quantum leap in the scale of preexisting threats cannot be known in advance. All of the above is to merely assert that we will see, and that such a medium could theoretically be used as art in the true sense, if such technology can be put in the hands of artists, and not just corporations.

MajorMajormajormajor,

This is both terrifying and fascinating at the same time. The potential in either direction is immense, imagine having a soundtrack to daily life tailored to what is happening? Would you hear boss music if you mess up at work/school/etc?

How this is viewed by the user is a good question as well. If it’s broadcast on a speaker/projector, then everyone else can see what the AI is showing us too. If it’s only viewable to us through implants or some sort of smart-glass technology, then it’s “private” to the user.

Like you mention, the potential abuse of this system is unimaginable. Ads shown directly in our vision, with a paid tier that is ad-free. Music is known to affect emotional state easily (movies being an example), so what if the soundtrack is used to emphasize certain goals? The AI pushes you to buy a certain car brand over another because said car brand paid the AI company more.

Gradually_Adjusting,

As McLuhan said, the medium is the message. If this is a story an AI is telling about you, to you, then it would probably best be a purely private experience happening in headphones or around the house. If this is a story where you are woven into the world as a character in other people’s lives, it should be happening around you.

In cinematic storytelling we talk about whether something is “diegetic”: is the soundtrack coming from something in the world, or is it something only the audience can perceive as part of a constructed experience? If the goal of a “fourth wall AI” is (or, in my opinion, should be) to make your perception of yourself more fully unified with the world around you, I’d advocate for these things to be realized ‘diegetically’, so that multiple fourth-wall AIs would have to work together to create harmony in the reality they construct for their audiences.

On a more sociological level, I am worried about how much we each feel separate and different from one another in society. I think that having the fourth wall AI be strictly a public phenomenon would be a better choice, not just from an artistic perspective, but for its potential to reinforce our social fabric.

hellothere,

None, because it’s a dead end and will burn out in the next few years.

elshandra,

Do you actually believe this?

LLMs are the opposite of a dead end - more like the opening of a pipe. It’s not that they will burn out; it’s just that they’ll reach a point where they’re perhaps just one function of a more complete AI.

At the very least they tackle a very difficult problem: communication between human and machine. That is their purpose. We have to tell machines what to do, when to do it, and how to do it, with such precision that there is no room for error. LLMs are not tools to prove truth, or anything like that.

If you ask an LLM a question, and it gives you a response that indicates it has understood your question correctly, and you are able to understand its response that far, then the LLM has done its job, regardless of whether the answer is correct.

Validating the facts of the response is another function again, which would employ LLMs as a translation tool.

It’s not a long leap from there to a language-translation tool between humans, where an AI is the interpreter. DeepL on ’roids.

lvxferre,

My belief is that LLMs are a dead end that will eventually burn out, but because they’ll be replaced with better models. In other words machine text generation will outlive them, and OP’s concerns are mostly regarding machine text generation, not that specific technology.

hellothere, (edited )

Do you actually believe this?

Yes. I’m also very happy to be proven wrong in the years to come.

If you ask an LLM a question, and it gives you a response that indicates it has understood your question correctly, and you are able to understand its response that far, then the LLM has done its job, regardless of whether the answer is correct

I don’t want to get too philosophical here, but you cannot detach understanding / comprehension from the accuracy of the reply, given how LLMs work.

An LLM, through its training data, establishes what an answer looks like based on similarity to what it’s been taught.

I’m simplifying here, but it’s like an actor in a medical drama. The actor is given a script that they repeat; that doesn’t mean they are a doctor. After a while the actor may be able to point out an inconsistency in the script, because they remember that last time a character had X they needed Y. That doesn’t mean they are right, or wrong, nor does it make them a doctor, but they sound like they are.

This is the fundamental problem with LLMs. They don’t understand, and in generating replies they just repeat. It’s a step forward on what came before, that’s definitely true, but repetition is a dead end because it doesn’t hold up to application or interrogation.
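The actor analogy can be made concrete with a toy bigram generator - far cruder than an LLM, but the same in spirit: it continues text purely from what tended to follow what in its training data, with no notion of whether the output is true. The tiny corpus and function names here are invented for illustration:

```python
import random
from collections import defaultdict

# Toy bigram text generator: each next word is drawn from the words that
# followed the current word in the training data. It "sounds like" the
# corpus without understanding any of it - the actor, not the doctor.

corpus = "the patient has a fever so the patient needs rest and the fever breaks".split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, length, seed=0):
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        if word not in follows:  # dead end: no observed continuation
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(generate("the", 8))  # fluent-sounding, meaning-free continuation
```

Every word pair it emits was seen in training, so the output is locally plausible; nothing checks whether the whole is medically (or otherwise) correct.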

The human-machine interface part, of being able to process natural language requests and then handing off those requests to other systems, operating in different ways, is the most likely evolution of LLMs. But generating the output themselves is where it will fail.

elshandra,

So I feel like we agree here. LLMs are a step toward solving a low-level human problem; I just don’t see that as a dead end. If we don’t take the steps, we’re still in the oceans. We’re also learning a lot in the process ourselves, and that experience will carry on.

I appreciate your analogy, I am well aware LLMs are just clever recursive conditional queries with big semi self-updating datasets.

Regardless of whether or not something replaces LLMs in the future, the data and the processing that’s gone into that data will likely be used along with the lessons we’re learning now. I think they’re a solid investment from any angle.

hellothere,

Regardless of whether or not something replaces LLMs in the future, the data and the processing that’s gone into that data will likely be used along with the lessons we’re learning now. I think they’re a solid investment from any angle.

I’m a big proponent of research for the sake of research, so I agree that lessons will be learnt.

But to go back to OP’s original question of how LLMs will affect spoken language: they won’t.

elshandra,

But to go back to OP’s original question of how LLMs will affect spoken language: they won’t.

That’s a rather closed-minded conclusion. It makes it sound like you don’t think they have a chance.

LLMs have the potential to pave the way to aligning spoken language, perhaps even evolving human communication to a point where speech is an occasional thing because it’s really inefficient.

hellothere, (edited )

You’re putting the cart very much before the horse here.

For what you describe to happen requires global ubiquity. For ubiquity to happen, it must be something with sufficient utility that people from all walks of life, and in all contexts (i.e. not just professional ones), gain value from it.

For that to happen, given the interface is natural language, the LLM must work across languages to a very high level, which works against the idea that human language will adapt to it. To work across languages at that level, it must adapt to humans, not the other way around.

This is different to other technology which has come before - like post, or email - where a technical restriction in particular format/structure (eg postal or email address) was secondary to the main content (the message).

For LLMs to affect language, you’re basically talking about human-to-human communication adopting “prompt engineering” characteristics. I just don’t see this happening at the scale you describe; human-to-human communication is woolly and imperfect, with large non-verbal elements, and while most people make do most of the time, we all, broadly speaking, suck at making points with perfect clarity and no misunderstanding.

For any LLM to be successful, it must be able to handle that, and being able to handle that dramatically reduces the likelihood of effecting change, because if change is required it won’t be successful.

It’s basically a tautology, which is why it’s such a difficult thing, and why our current generation of models is supported mainly through hype and FOMO.

Lastly, the closest example to a highly structured prompt that currently exists is programming languages. These are used by millions of people every day, and still developers do not talk to each other in their preferred language’s syntax.

elshandra,

This is interesting and thought provoking discussion, ty.

You’re absolutely right - I was looking at the dead end of plugging an LLM into a solution.

I’m more thinking that LLMs used in conjunction with other tech will have these effects on our communication. LLMs, or whatever replaces them to do that interpretation, are necessary to facilitate that.

When we come up with something better, to do the same job better, then of course, LLMs will be redundant. If that happens, great.

We are already seeing a boom in the popularity of LLMs outside of professional use. Global ubiquity for anything is never going to happen unless we can fix communication, which we probably can’t. We certainly can’t alone. It’s very much a chicken-and-egg problem, one that we can only gain from by progressing towards.

Imagining vocalising in programming languages gave me a chuckle. I have been known to do things like use s/x/y/ to correct myself in written chats, though.

Programming languages allow us to talk to and listen to machines. LLMs will hopefully allow machines to listen and talk to/between us.

elshandra, (edited )

I’m going to take the time to illustrate here how I can see LLMs affecting human speech, through existing applications and technologies that are (or could be) made both available and popular enough to achieve this. We’re far enough down the comment chain that I can reply to myself now, right?

So, we can all agree that people are increasingly using LLMs, in the form of ChatGPT and the like, to acquire knowledge/information, the same way they would use a search engine to follow a link to that knowledge.

Speech-to-text has been a thing for at least three decades (yeah, it was pretty hopeless once, but not so much now), so let’s not argue about speech vs text. People already talk to Google and Siri and whoever else to this end, LLMs included, and have the responses read out via TTS.

I remember being blown away in 1998, watching a blind sysadmin interact with a Linux shell via TTS at rates where I couldn’t even make out the words. How far we’ve come. But I digress.

We’ve all experienced trouble getting the information we’re looking for, even with all these tools, because there’s so much information and it can be very difficult to find the needle in the haystack. So we constantly have to refine our queries, either to be more specific or to exclude relationships to other information.

This, in turn, causes us to think more often about the words we use to get the results we want, because otherwise we spend too much time iterating.

In turn, the more we do this, and are trained to do this, the more it will bleed into human communication.
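The query-refine-retry loop described above can be sketched like this. `ask_llm`, the canned replies, and `refine_until` are all hypothetical stand-ins invented for illustration (the canned dictionary takes the place of a real model, so the sketch is runnable); the point is the habit the loop trains:

```python
# Sketch of the refinement loop: ask, inspect the result, refine, repeat.
# ask_llm() is a hypothetical placeholder backed by canned replies;
# a real system would call an actual model or search backend here.

CANNED = {
    "get me a chair": "wheels out an office chair from the study",
    "get me a chair for at the kitchen table": "brings a kitchen chair",
}

def ask_llm(query):
    return CANNED.get(query, "no useful result")

def refine_until(query, is_good, refinements):
    """Try the query, then each refinement in order, until an answer passes is_good."""
    for q in [query] + refinements:
        answer = ask_llm(q)
        if is_good(answer):
            return q, answer
    return None, None

q, a = refine_until(
    "get me a chair",
    is_good=lambda ans: "kitchen" in ans,
    refinements=["get me a chair for at the kitchen table"],
)
print(q)  # the query that finally worked
```

Each trip around the loop is a small nudge toward front-loading the missing context, which is exactly the habit being described.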

Now look, there is absolutely a lot of hopium being smoked here, but damn, this could have a lasting impact on verbal communication. If technology can train people - through inaccurate/incorrect results - to think about the communication going out when they speak, we could drastically reduce the amount of miscommunication between people by that alone.

Imagine:

get me a chair

wheels out an office chair from the study

no I meant a chair for at the kitchen table

Vs

get me a chair for at the kitchen table

You can apply the same thing to human prompted image generation and video generation.

Now… we don’t need LLMs to do this, or to know this. But we are never going to achieve it without a third party - the “LLM” and whatever it’s plugged into - because the human recipient will usually be more capable of translating these variances, or will employ other context not as accessible via a single output such as speech or text.

But if machines train us to communicate better (more accurately, precisely, and/or concisely), that is an effect I can’t welcome enough.

Realistically, the machines will learn to deal with us being dumb, before we adapt.

e: formatting.

hellothere,

My question is simple.

Given that humans have not already achieved this clarity of communication, even though we are social animals, have been utterly dependent on each other for the entire existence of our species, communication was literally a matter of life and death, and for the vast majority of that time we only communicated through speech (the written word dates to approx. 4000 BCE)… then why would an LLM, or any human-machine interface for that matter, achieve this as a side effect of usage?

I fully accept that people, everyone, can be trained in precise speech, but we aren’t talking about purposeful training here.

elshandra,

Let’s not argue about the potential of “any human-machine interface”, because nobody knows how far that can go. We have an idea, but there’s still way too much we don’t understand.

You’re right, humans never have and never will alone. It’s a long shot and, as I said, pretty unlikely, because the models will just get better at compensating. But I imagine that if people were interacting with LLMs regularly - vocally - they would soon get tired of extended conversations to get what they want, and repeated training in forming those questions for an LLM would maybe in turn be reflected in their human interactions.
