SmallOther, to random
@SmallOther@techhub.social avatar

I might republish Computer Power and Human Reason by Joseph Weizenbaum. I found the copyright holder and have a plan.

It is just as relevant today as it ever was. It's been out of print for 51 years and the inside cover has a quote from Chomsky saying that the book will continue to be read in 50 years.

Weizenbaum_Institut, to random German
@Weizenbaum_Institut@social.bund.de avatar

Who was Joseph Weizenbaum, and why is the work of the AI critic still so relevant today? Our brand-new audio feature 📻 Weizenbaum auf der Spur 📻 discusses his life and legacy with companions and experts. Listen to the first episode now!
🎧 https://www.weizenbaum-institut.de/w-100/weizenbaum-auf-der-spur/

Weizenbaum_Institut, to random German
@Weizenbaum_Institut@social.bund.de avatar

Today Joseph Weizenbaum would have turned 101! That brings our anniversary year W/100 to a close, but the social criticism of the well-known computer pioneer is more relevant today than ever. We will keep making sure that his important questions enrich many more discussions about technology, AI, and society.
All of the year's activities can be found here: https://www.weizenbaum-institut.de/w-100/

topio, to random German
@topio@mastodon.social avatar

Why the demand for digital maturity is a neoliberal impossibility is shown by the master's thesis of Mareike Lisker (TU Berlin), which received an award yesterday --

:patcat: With that, we might as well throw our work at @topio and @reticuleena's book (Digitale Mündigkeit) in the bin.

Leena is also cited on page 7.

The thesis is available here (PDF):
https://ccs.chaostreff-flensburg.de/arbeiten/arbeit-2.pdf

Rainer_Rehak, (edited ) to random German
@Rainer_Rehak@mastodon.bits-und-baeume.org avatar

Mareike Lisker (TU Berlin) has just received the -- for her master's thesis on the impossibility of digital maturity under the current digital conditions. What is actually at work is a neoliberal demand placed on the individual that the individual structurally cannot live up to. Full text: https://fahrplan.2023.fiffkon.de/fiffkon23/talk/L7KVDY/

fiff_de, to ai German
@fiff_de@mastodon.bits-und-baeume.org avatar

FIfFKon23, 3-5 November

The challenges of computer science are all too often considered in terms of the dangers, risks, unfulfillable promises, and dystopias of digitalisation. But how can the FIfF help to combine a critical computer science constructively with a positive orientation?

Based on your feedback, we have selected three focus areas:

Cyberpeace - e.g. critique of the plans for the Future Combat Air System (FCAS) and of the military use of artificial intelligence

Information and sustainability - Bits & Bäume - e.g. information technology and development policy

Developments in artificial intelligence - e.g. opportunities and risks of developing and using ChatGPT, and the consequences for IT security

https://blog.fiff.de/fiffkon-2023/

fiff_de, (edited ) to random German
@fiff_de@mastodon.bits-und-baeume.org avatar

New member onboarding in October

https://blog.fiff.de/neumitglieder-onboarding-im-oktober/

Since the beginning of last year we have been meeting every three months, on the third Wednesday of the month at 7 p.m., via video conference to welcome new members in person. Long-standing members are also invited to get to know each other and the new members, to exchange ideas, and to ask questions!

Afterwards, from 8 p.m. onwards and with a seamless transition, the "intergalactic Stammtisch" takes place in the same video-conference room; everyone, including interested non-members, is warmly invited.

tagesschau, to random German
@tagesschau@ard.social avatar

Are chatbots suitable for psychotherapy?

The number of mental illnesses in Germany is rising - the waiting lists for therapy places are long. Can AI chatbots that advise those affected help here? And what are the risks? By Lara Kubotsch.

➡️ https://www.tagesschau.de/wissen/forschung/ki-psychotherapie-100.html?at_medium=mastodon&at_campaign=tagesschau.de

Weizenbaum_Institut,
@Weizenbaum_Institut@social.bund.de avatar

@tagesschau Joseph Weizenbaum already had an answer in 1976.
More: https://www.weizenbaum-institut.de/w-100/

nunesgh, to ai
@nunesgh@mastodon.social avatar

AI Criticism Has a Decades-Long History.

"Paris Marx is joined by Ben Tarnoff to discuss the #ELIZA chatbot created by Joseph #Weizenbaum in the 1960s and how it led him to develop a critical perspective on #AI and computing that deserves more attention during this wave of AI hype."

https://www.techwontsave.us/episode/182_ai_criticism_has_a_decades_long_history_w_ben_tarnoff

wikinaut, to random German
@wikinaut@berlin.social avatar

Panel discussion before the screening of the film
"Weizenbaum. Rebel at Work."

@Weizenbaum_Institut

100 years of Joseph Weizenbaum

Weizenbaum_Institut, to ChatGPT German
@Weizenbaum_Institut@social.bund.de avatar

What would Joseph Weizenbaum say about ChatGPT today? Next week we are showing the film "Weizenbaum. Rebel at Work." at the Hofkino Berlin and talking with the directors as well as Gunna Wendt, one of his companions.
🎟️ Free tickets ➡️ https://www.weizenbaum-institut.de/events/weizenbaum-filmnacht-rebel-at-work/

skarthik, to ai
@skarthik@neuromatch.social avatar

A must-read longform piece by Ben Tarnoff on Joseph Weizenbaum (of ELIZA fame, here at MIT), the chatbot pioneer who became one of AI's earliest and most prominent detractors!

Weizenbaum's was a lone voice in the 1970s, and many of his views are as apt now as they were then. Almost the entire article is chock full of quotes, and so here is a thread with quotes and a few of my thoughts interspersed.

https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai

"There is so much in Weizenbaum’s thinking that is urgently relevant now. Perhaps his most fundamental heresy was the belief that the computer revolution, which Weizenbaum not only lived through but centrally participated in, was actually a counter-revolution."

I too had my early training in computer science, and I have come to see it increasingly as counter-revolutionary, not even delivering on its promises of economic productivity (a topic for another day). With my expertise in neural networks, I should be riding the current wave of AI all the way to the bank; instead, I mostly see it as a land of false promises mediated by dangerous levels of Silicon Valley myth-making.

1/8

skarthik,
@skarthik@neuromatch.social avatar

Weizenbaum vs what he termed the "artificial intelligentsia":

"Minsky was bullish and provocative; one of his favourite gambits was to declare the human brain nothing but a “meat machine” whose functions could be reproduced, or even surpassed, by human-made machines. Weizenbaum disliked him from the start... Weizenbaum’s trouble with Minsky, and with the AI community as a whole, came down to a fundamental disagreement about the nature of the human condition."

"... he [Weizenbaum] argued that no computer could ever fully understand a human being. Then he went one step further: no human being could ever fully understand another human being. Everyone is formed by a unique collection of life experiences that we carry around with us, he argued, and this inheritance places limits on our ability to comprehend one another. We can use language to communicate, but the same words conjure different associations for different people – and some things can’t be communicated at all. “There is an ultimate privacy about each of us that absolutely precludes full communication of any of our ideas to the universe outside ourselves,” Weizenbaum wrote."

2/8

skarthik,
@skarthik@neuromatch.social avatar

Weizenbaum and what he saw as necessary political action:

"Weizenbaum supported the action [MIT students protests of 1969] and became strongly affected by the political dynamism of the time. “It wasn’t until the merger of the civil rights movement, the war in Vietnam, and MIT’s role in weapons development that I became critical,” he later explained in an interview. “And once I started thinking along those lines, I couldn’t stop.”

"... MIT was receiving more money from the Pentagon than any other university in the country. Its labs pursued a number of projects designed for Vietnam... Project MAC – under whose auspices Weizenbaum had created Eliza – had been funded since its inception by the Pentagon... wrestled with this complicity, he found that his colleagues, for the most part, didn’t care about the purposes to which their research might be put. If we don’t do it, they told him, somebody else will. Or: scientists don’t make policy, leave that to the politicians. Weizenbaum was again reminded of the scientists in Nazi Germany who insisted that their work had nothing to do with politics... Consumed by a sense of responsibility, Weizenbaum dedicated himself to the anti-war movement... Where possible, he used his status at MIT to undermine the university’s opposition to student activism. After students occupied the president’s office in 1970, Weizenbaum served on the disciplinary committee. According to his daughter Miriam, he insisted on a strict adherence to due process, thereby dragging out the proceedings as long as possible so that students could graduate with their degrees."

3/8

skarthik,
@skarthik@neuromatch.social avatar

What differentiates humans from machines, the question at the heart of "human intelligence" and of Weizenbaum's critique:

"In 1976, Weizenbaum published his magnum opus: Computer Power and Human Reason: From Judgment to Calculation... The book is indeed overwhelming. It is a chaotic barrage of often brilliant thoughts about computers. A glimpse at the index reveals the range of Weizenbaum’s interlocutors: not only colleagues like Minsky and McCarthy but the political philosopher Hannah Arendt, the critical theorist Max Horkheimer, and the experimental playwright Eugène Ionesco."

"The book has two major arguments. First: there is a difference between man and machine. Second: there are certain tasks which computers ought not be made to do, independent of whether computers can be made to do them. The book’s subtitle – From Judgment to Calculation – offers a clue as to how these two statements fit together."

4/8

skarthik,
@skarthik@neuromatch.social avatar

Values, decisions, judgements: are they fit for optimization?

"For Weizenbaum, judgment involves choices that are guided by values. These values are acquired through the course of our life experience and are necessarily qualitative: they cannot be captured in code. Calculation, by contrast, is quantitative. It uses a technical calculus to arrive at a decision. Computers are only capable of calculation, not judgment. This is because they are not human, which is to say, they do not have a human history – they were not born to mothers, they did not have a childhood, they do not inhabit human bodies or possess a human psyche with a human unconscious – and so do not have the basis from which to form values."

"(It would be a “monstrous obscenity”, Weizenbaum wrote, to let a computer perform the functions of a judge in a legal setting or a psychiatrist in a clinical one.)"

This is as succinct an argument as any for why we should not make the value judgement that humans are "meat machines" or "biological automata" or just a "bag of neurons" with a body attached, nor rush to suggest that we are neither special nor unique because every trait a human possesses is supposedly now replicable by a machine. The one word is VALUE, and that's quite a loaded word!

5/8

skarthik,
@skarthik@neuromatch.social avatar

Human agency, the ability to make history, and usher progress:

"Just as the bomber pilot “is not responsible for burned children because he never sees their village”, Weizenbaum wrote, software afforded generals and executives a comparable degree of psychological distance from the suffering they caused."

"... Bound by an algorithmic logic, software lacked the flexibility and the freedom of human judgment. This helps explain the conservative impulse at the heart of computation. Historically, the computer arrived “just in time”, Weizenbaum wrote. But in time for what? “In time to save – and save very nearly intact, indeed, to entrench and stabilise – social and political structures that otherwise might have been either radically renovated or allowed to totter under the demands that were sure to be made on them.” "

This is an absolutely wonderful synthesis of how the status quo is preserved: the relegation of agency and the wilful denial of responsibility by replacing humans with technologies, a dynamic that has only intensified since.

As the author notes later:
"Weizenbaum was always less concerned by AI as a technology than by AI as an ideology – that is, in the belief that a computer can and should be made to do everything that a human being can do. This ideology is alive and well. It may even be stronger than it was in Weizenbaum’s day."

What is this "ideology"? It goes something like what the political philosopher Slavoj Žižek wrote: "they know it, but they are doing it anyway", which he riffs from Marx, who wrote of the capitalists, "they do not know it, but they are doing it". Marx himself was riffing on Jesus (from Luke): "Father, forgive them; for they do not know what they're doing".

6/8

skarthik,
@skarthik@neuromatch.social avatar

He seems to have been on the mark with respect to climate change:

"The later Weizenbaum was increasingly pessimistic about the future, much more so than he had been in the 1970s. Climate change terrified him. Still, he held out hope for the possibility of radical change. As he put it in a January 2008 article for Süddeutsche Zeitung: “The belief that science and technology will save the Earth from the effects of climate breakdown is misleading. Nothing will save our children and grandchildren from an Earthly hell. Unless: we organise resistance against the greed of global capitalism.”"

7/8

skarthik,
@skarthik@neuromatch.social avatar

Coda:

Reading these excerpts on Weizenbaum's "no human being could ever fully understand another human being", I am reminded of a paragraph from the great anthropologist David Graeber's less-read work, "Lost People", which states this quite stunningly:

"If it is really true... that what makes us human is above all our capacity to make history, and if history consists of actions that could not have been predicted be­forehand, then that would mean that the fundamental measure of our human­ity lies in what we cannot know about each other. To recognize another person as human would then be to recognize the limits of one's possible knowledge of them. Their humanity is inseparable from their capacity to surprise us."

For too long, in the name of science and technology, for mostly nefarious and monetary purposes, the naysayers have insisted on making machines out of humans, and rendered them soulless and disenchanted. It's time to put the soul of value back into humans.

8/8

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Artificial intelligence, he came to believe, was an “index of the insanity of our world.”

Today, the view that artificial intelligence poses some kind of threat is no longer a minority position among those working on it. There are different opinions on which risks we should be most worried about, but many prominent researchers, from Timnit Gebru to Geoffrey Hinton – both ex-Google computer scientists – share the basic view that the technology can be toxic. Weizenbaum’s pessimism made him a lonely figure among computer scientists during the last three decades of his life; he would be less lonely in 2023.

There is so much in Weizenbaum’s thinking that is urgently relevant now. Perhaps his most fundamental heresy was the belief that the computer revolution, which Weizenbaum not only lived through but centrally participated in, was actually a counter-revolution. It strengthened repressive power structures instead of upending them. It constricted rather than enlarged our humanity, prompting people to think of themselves as little more than machines. By ceding so many decisions to computers, he thought, we had created a world that was more unequal and less rational, in which the richness of human reason had been flattened into the senseless routines of code.

Weizenbaum liked to say that every person is the product of a particular history. His ideas bear the imprint of his own particular history, which was shaped above all by the atrocities of the 20th century and the demands of his personal demons. Computers came naturally to him. The hard part, he said, was life."

https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai

stefanhoeltgen, to ArtificialIntelligence German
@stefanhoeltgen@mastodon.social avatar

Lo and behold: the second edition of Marianna Baranovska-Bölter's and my essay collection on Weizenbaum's ELIZA is "in print" now! It should be out in July!

kentindell, to random
@kentindell@mastodon.social avatar

deleted_by_author

chukk_beslowair,

@kentindell Simple is the wrong adjective, in my opinion. In a world where loneliness rises to astonishing heights, the feeling that someone is actually listening to you, in this case ELIZA, is a very powerful force that should never be underestimated.
