#AI #GenerativeAI #OpenAI #ChatGPT: "This contradiction is at the heart of what makes OpenAI profoundly frustrating for those of us who care deeply about ensuring that AI really does go well and benefits humanity. Is OpenAI a buzzy, if midsize, tech company that makes a chatty personal assistant, or a trillion-dollar effort to create an AI god?
The company’s leadership says they want to transform the world, that they want to be accountable when they do so, and that they welcome the world’s input into how to do it justly and wisely.
But when there’s real money at stake — and there are astounding sums of real money at stake in the race to dominate AI — it becomes clear that they probably never intended for the world to get all that much input. Their process ensures former employees — those who know the most about what’s happening inside OpenAI — can’t tell the rest of the world what’s going on.
#OpenAI strikes Reddit deal to train its AI on your posts https://www.theverge.com/2024/5/16/24158529/reddit-openai-chatgpt-api-access-advertising
“Reddit has become one of the internet’s largest open archives of authentic, relevant, and always up-to-date human conversations about anything and everything. Including it in #ChatGPT upholds our belief in a connected internet, helps people find more of what they’re looking for, and helps new audiences find community on Reddit,” #Reddit CEO Steve Huffman says.
“The certain knowledge that Kevin Roose is a credulous dumbass who makes a jingle-bell sound if he nods his head real fast only does so much to moderate the obscenity and offensiveness of his ascribing ‘playful intelligence’ and ‘emotional intuition’ to a predictive text generator.”
Kevin Roose so desperately wants to live in the futures tech companies are selling that he’ll eagerly do their PR for them and buy into whatever illusions of intelligence they put in front of him so he can trick himself into believing they’ll actually be realized this time.
#NOYB, the Austrian digital-rights group that regularly takes on the #GAFAM companies, has just filed a complaint against ChatGPT in #Austria. OpenAI's conversational agent is unable to give accurate information about people, or to correct its errors, the NGO laments, arguing that #OpenAI does not comply with the #GDPR (#RGPD).
🤖 NetBSD’s New Policy: No Place for AI-Created Code
— @linuxiac
“New development policy: code generated by a large language model or similar technology (e.g. ChatGPT, GitHub Copilot) is presumed to be tainted (i.e. of unclear copyright, not fitting NetBSD’s licensing goals) and cannot be committed to NetBSD.”
Am I the only one skeptical about modern developers focusing so much on making AI look and sound human? Is it some God-complex, "create them in their own image" kind of thing? Because what I need from AI as an individual is for it to do the mundane tasks and be recognizable as AI. I don't need it to impersonate a virtual friend or anything.
Normal people using AI: look how stupid this shit is!!
Terence Tao using AI: As an experiment, I asked #ChatGPT to write #Python code to compute, for each n, the length M(n) of the longest subsequence of (1, …, n) on which the Euler totient function φ is non-decreasing. For instance, M(6) = 5, because φ is non-decreasing on 1, 2, 3, 4, 5 (or 1, 2, 3, 4, 6) but not on 1, 2, 3, 4, 5, 6. Interestingly, it was able to produce an extremely clever routine to compute the totient function (that I had to stare at for a few minutes to see why it actually worked), but the code to compute M(n) was slightly off: it only considered subsequences of consecutive integers, rather than arbitrary subsequences. Nevertheless, it was close enough that I was able to manually produce the code I wanted using the initial GPT-produced code as a starting point, probably saving me about half an hour of work. (And I now have the first 10,000 values of M.) The results were good enough that I would likely turn to GPT again to provide initial code for similar calculations in the future. chat.openai.com/…/a022e1d6-dddc-4817-8bbd-944a3e7…
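Tao doesn't share his final code, but the computation he describes can be sketched as follows: M(n) is exactly the length of the longest non-decreasing subsequence of the list [φ(1), …, φ(n)], which patience sorting handles in O(n log n). The function names `totients` and `M` here are illustrative choices, not his.

```python
from bisect import bisect_right

def totients(n):
    """Euler's totient phi(k) for k = 1..n, via a linear-style sieve."""
    phi = list(range(n + 1))
    for p in range(2, n + 1):
        if phi[p] == p:  # p is prime: apply the factor (1 - 1/p) to its multiples
            for k in range(p, n + 1, p):
                phi[k] -= phi[k] // p
    return phi[1:]  # phi(1), phi(2), ..., phi(n)

def M(n):
    """Length of the longest subsequence of (1, ..., n) on which phi is
    non-decreasing, i.e. the longest non-decreasing subsequence of
    [phi(1), ..., phi(n)], computed by patience sorting."""
    tails = []  # tails[i] = smallest tail of any non-decreasing subsequence of length i+1
    for v in totients(n):
        i = bisect_right(tails, v)  # bisect_right allows equal values (non-strict)
        if i == len(tails):
            tails.append(v)
        else:
            tails[i] = v
    return len(tails)

print(M(6))  # 5, matching Tao's example
```

For n = 6 the totient values are [1, 1, 2, 2, 4, 2], whose longest non-decreasing subsequence (e.g. 1, 1, 2, 2, 4) has length 5.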
Yeah, this is the "politically correct" tuning of the model making it go "crazy."
I asked ChatGPT 3.5:
How much more lifting capacity in a dirigible does Hydrogen gas have vs. Helium gas?
I had to argue with it: GPT kept insisting that helium had more lifting power and was the lighter gas (and kept pushing "it's also SAFER" at me over and over).
(The correct answer: hydrogen gives about 8% more gross lift than helium for a given volume of gas.)
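The 8% figure is easy to check from first principles: gross lift per unit volume is the density of air minus the density of the lifting gas. Using standard textbook densities at 0 °C and 1 atm (an assumption; the ratio barely changes at other conditions):

```python
# Densities in kg/m^3 at 0 deg C and 1 atm (standard reference values)
AIR = 1.293
H2 = 0.0899
HE = 0.1786

# Gross lift per cubic metre = mass of displaced air minus mass of gas
lift_h2 = AIR - H2  # ~1.203 kg/m^3
lift_he = AIR - HE  # ~1.114 kg/m^3

extra = lift_h2 / lift_he - 1
print(f"Hydrogen provides {extra:.1%} more gross lift per m^3 than helium")
```

Helium is twice as dense as hydrogen, but because both are so much lighter than air, the difference in net lift is only about 8%, exactly as the post states.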
When OpenAI created its “Superalignment” team in summer 2023, the goal was for it to “steer and control future AI systems that could be so powerful they could lead to human extinction,” reports @engadget. “Less than a year later, that team is dead.”
Jan Leike, one of the team’s leaders, who quit earlier this week, posted a scathing statement on X showing the internal tensions between the safety team and the wider company.
“OpenAI is shouldering an enormous responsibility on behalf of all of humanity,” he wrote. “But over the past years, safety culture and processes have taken a backseat to shiny products.” Engadget has more.
#NetBSD joins the ranks of software projects that ban #AI generated code.
How they are going to enforce such a ban is an obvious question lingering in the air.
Does it include only cases like “hey #ChatGPT write a suite of unit tests for this class”? Or also cases where #Copilot simply autocompletes a for loop while I’m typing it?
In the latter case, how would a hypothetical reviewer enforce the ban? How would a for loop autocompleted by Copilot, or boilerplate population of hashmap values, look any different from one I would write myself?
And if the issue is with any code that isn’t directly written by a human, then why stop at modern AI generation? Why not include LINTers and traditional IDE autocomplete features?
I have no doubt that the projects that are announcing these no-AI policies have good intentions, but it’s probably time for all of us to have an honest talk.
Code completion isn't a clear-cut binary feature. It's a broad spectrum that runs from the venerable Exuberant Ctags to ChatGPT writing whole classes.
And code completion shouldn't be banned. If it makes a developer more productive, and if the developer understands the code being completed, then such bans are akin to saying "drivers should only use cars with manual transmission because we feel it's more manly." It's a conservative and elitist act of shunning new productivity tools because we can't understand them and regulate them properly.
And more people need to call the bluff: in cases where the AI only completes a few lines of code, it's basically impossible to tell whether that snippet was written by a human or an AI assistant.
In today's digital era, workplace dynamics are changing rapidly, and technological innovations play an ever-greater role in how we work and communicate. One of the tools beginning to revolutionize modern workplaces is Chat GPT Gratis. In this article, we will explore the role Chat GPT Gratis plays in today's...
"Code generated by a large language model or similar technology, such as GitHub/Microsoft's Copilot, OpenAI's ChatGPT, or Facebook/Meta's Code Llama, is presumed to be tainted code, and must not be committed without prior written approval by core."
Solve a puzzle for me (sopuli.xyz)
Chat GPT Gratis and its Role in Modern Workplaces (chatgptsv.se) [Swedish]