NYTimes BREAKING NEWS - Tuesday, June 4, 2024 9:06 AM ET #OpenAI has not done enough to prevent its artificial intelligence systems from becoming dangerous, a group of former and current employees said.
The group published an open letter on Tuesday calling for leading A.I. companies to establish greater transparency and more protections for whistle-blowers. #AI #ArtificialIntelligence #technology
The mass exodus from #Windows to #Linux (and #Mac) due to #Windows11 and #AI continues. More and more articles, YouTube videos, and forum posts are appearing about it. People are switching. If it continues like this, Linux should have 10% desktop market share by the end of the decade (and yes, that's a lot).
Media Chinese International Ltd, a Malaysia-based media group, announces a 44% staff reduction to cut rising publishing costs, affecting nearly 800 employees.
The company plans to leverage AI to monetize digital content and further reduce costs.
This move highlights the ongoing challenges in the media industry.
AI chatbots and large language models struggle to convey genuine empathy, a new study found, but that’s not the worst of it. Research led by a Stanford computer scientist shows that these conversational agents can also encourage toxic belief systems like Nazism, racism, and sexism. Live Science reports: https://flip.it/Q3T4Uu #Science #AI #ArtificialIntelligence #Chatbots #Empathy
In a new podcast, Linus (of Linus Tech Tips) said that with the upcoming #Windows #AI brouhaha, a lot of users are going to move to #Chromebooks. But just today #Google announced that AI is coming to their #Chromebook line too. Maybe just a chatbot for now, but eventually it'll be more integrated. The only option (for those who can't stand #Apple) is #Linux, on their existing, older PC. That's why distros that run in low RAM are important.
Before I head off on a trip to various parts of not-Barcelona, I thought I’d share a somewhat provocative paper by David Hogg and Soledad Villar. In my capacity as journal editor over the past few years, I’ve noticed a phenomenal increase in astrophysics papers discussing applications of various forms of Machine Learning (ML). This paper looks into issues around the use of ML not just in astrophysics but elsewhere in the natural sciences.
The abstract reads:
Machine learning (ML) methods are having a huge impact across all of the sciences. However, ML has a strong ontology – in which only the data exist – and a strong epistemology – in which a model is considered good if it performs well on held-out training data. These philosophies are in strong conflict with both standard practices and key philosophies in the natural sciences. Here, we identify some locations for ML in the natural sciences at which the ontology and epistemology are valuable. For example, when an expressive machine learning model is used in a causal inference to represent the effects of confounders, such as foregrounds, backgrounds, or instrument calibration parameters, the model capacity and loose philosophy of ML can make the results more trustworthy. We also show that there are contexts in which the introduction of ML introduces strong, unwanted statistical biases. For one, when ML models are used to emulate physical (or first-principles) simulations, they introduce strong confirmation biases. For another, when expressive regressions are used to label datasets, those labels cannot be used in downstream joint or ensemble analyses without taking on uncontrolled biases. The question in the title is being asked of all of the natural sciences; that is, we are calling on the scientific communities to take a step back and consider the role and value of ML in their fields; the (partial) answers we give here come from the particular perspective of physics.
arXiv:2405.18095
P.S. The answer to the question posed in the title is probably “yes”.
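As a concrete illustration of the epistemology the abstract criticizes, here is a minimal sketch of the held-out-data criterion: a model is declared "good" purely because its error on data it never saw is small, with no reference to whether its parameters correspond to any underlying physical law. The toy dataset, the linear fit, and all names below are illustrative assumptions, not anything from the paper.

```python
import random

random.seed(0)

# Toy dataset: noisy samples of y = 2x + 1. The generating law plays no
# role in the evaluation -- only predictive performance does.
data = [(x, 2 * x + 1 + random.gauss(0, 0.1))
        for x in (i / 10 for i in range(100))]
random.shuffle(data)
train, held_out = data[:80], data[80:]

def fit_line(points):
    # Ordinary least squares for a 1-D linear model y = a*x + b.
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def mse(model, points):
    # Mean squared error of the fitted line on a set of points.
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in points) / len(points)

model = fit_line(train)
# The ML criterion: the model is "good" iff held-out error is small,
# regardless of whether (a, b) mean anything physically.
print(mse(model, held_out))
```

In the natural sciences one would additionally ask whether the fitted parameters are interpretable and consistent with theory; under the pure ML epistemology sketched here, the held-out error is the whole verdict.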
"Artificial stupidity is a term used within the field of computer science to refer to a technique of 'dumbing down' computer programs in order to deliberately introduce errors in their responses. "
Does anyone else hit the Shakespeare button on their work messages? It really messes with people. This is me asking for a better image or some dimensions:
Verily, my good friend, I doth require an improved image or dimensions to complete this order. Wherefore, I beseech thee to assist me in mine endeavour.
"The significance of this development is profound: If AI provides the answers to all the searches that you’re looking for, there is no need to click on the source articles that provide the answers. If you don’t click on the source articles, the publishers do not receive any ad revenue, and if the publishers do not receive ad revenue, they cannot pay their writers to provide the answers for Google’s AI to steal."
AI does not generate knowledge; rather, according to preset parameters and algorithms, it arranges words into a sequence that is first and foremost grammatically correct, in a new combination.
Anything beyond that is limited by the available dataset of information. If the information (i.e. the dataset) is garbage, then AI reproduces grammatically correct garbage in a different arrangement.