imo the user experience of GitHub #Copilot stinks. Generating code is one of the tasks I trust an #LLM least. I'd rather have a chat interface so I can ask it to:
Refactor
Generate files
Move files
Navigate
Ask questions about code
Understand a new code base
Sure, writing little bits of code is kinda cool, but also ehh 🤨 I’d rather just type it myself. Feels like a lot of opportunity left on the table
So, earlier today I wrote about how Google's Bard is in for a rude awakening because, according to their own researchers (now ex-googlers), large language models are impossible to secure
"Google Bard is a glorious reinvention of black-hat SEO spam and keyword-stuffing"
#LLMs "...can be #poisoned through their training data—both the data used in the initial training and fine-tuning...[it is possible to do] #KeywordManipulation and degrade output with 👉as few as a hundred toxic entries,👈..."
B/c: "... Large AI models are bound to be 👉dangerous👈. Their rushed deployment, especially at..."
I've got to be honest, I am not so keen on all this #llm #ai hype.
Now my son just figured out how to jailbreak #chatgpt and it's telling him how to make mustard gas 😵
People in security and computing have been saying for years - there's no cloud. There's just someone else's computer.
Right now, there's no AI. There's just someone else's work.
Stop calling generative text and image programs AI. It's inaccurate and insulting. They are just the evolution of corporate creative theft that's been going on as long as media corporations have existed.
@ianrosewrites There may be some cases where you're right - we should look at them. But I think you go too far. When I answer a student question I try to use everything I've ever read on the subject, in books, papers, on the internet. Is this creative theft? I think not. It's the way humans operate. Building on the shoulders of giants (and everyone else). Not a bad thing I think. #AI #LLM #chatgpt
“#AI-art generators are trained on enormous datasets, containing millions upon millions of copyrighted images, harvested without their creators' knowledge, let alone compensation or consent. This is effectively the greatest art heist in history.”
All of the outrage over the redesigned Canadian passport is just more culture war bullshit, and our media is just feeeeeeeding into it. Tell me, how many times have you actually sat there and stared at the passport like it was some art book?
Seriously people, fuck right off with the bullshit that the rage farmers want to feed you.
The articles suggest that the redesign reflects a government that is self-hating, detached from the nation's identity, and focused on promoting a progressive agenda. They call for a better-designed passport that respects the country's history, heroes, and values. (2/2) #LLM #CanPoli #GPT4
This is a fun kind of ego surfing. In ChatGPT's world, I have been so prolific! I have written about evvvvrything!
In our world, of course, I've never written about cathedrals.
The pathetic thing is I felt compelled to Google the made-up article I haven't written. I mean, what if I forgot something? I know the machine bullshits, but my mind is fallible and my memory fails, maybe it is me who does not know my own work?
3/ The argument that the government has no resources to really understand #AI is also false.
The US government, national labs, and major public universities have incredible levels of talent in AI.
All available to help craft regulations without capture.
It's also a familiar irony that an industry that has been mining public institutions for everyone with talent & trying to hire them away... is implying that there is no role for those institutions to play.
My daughter attends a #Waldorf school. Something I’ve been thinking is that the Waldorf methodology is probably #AI proof, in that they don’t rely on testing. It focuses on the whole person and on fostering creativity, things we need for working with #LLM tech. It might be the education needed to live in an AI world
People in the know about #AI and LLMs: is there an upper limit to what's possible with an #LLM, or can you just keep throwing more compute power at training to make them better?
Suppose I wanted to measure the amount of bias, discrimination, hallucination, etc. in tools like Bard, Bing, ChatGPT and others. Are there already standards and tools to measure that?
There will be discussions about whether model A is better or worse than model B; it would be nice to have some standard benchmarks for evaluation 🤔 #AI #GenerativeAI #Evaluation #LLM
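Some standardized efforts do exist - TruthfulQA for factuality, BBQ for social bias, and the broader HELM suite - but the general shape of such an evaluation is simple enough to sketch. This is a minimal, hypothetical harness (the `ask` callable and the toy benchmark are both made up for illustration): fixed prompts, reference answers, and an aggregate score per model.

```python
def exact_match_score(answers, references):
    """Fraction of answers that match the reference (case-insensitive)."""
    hits = sum(a.strip().lower() == r.strip().lower()
               for a, r in zip(answers, references))
    return hits / len(references)

# Tiny hand-made "benchmark": prompts paired with known-correct answers.
# Real benchmarks have thousands of items and subtler scoring.
BENCHMARK = [
    ("What is the capital of Canada?", "Ottawa"),
    ("How many legs does a spider have?", "8"),
]

def evaluate(ask):
    """Score a model, given any ask(prompt) -> answer callable."""
    answers = [ask(prompt) for prompt, _ in BENCHMARK]
    references = [ref for _, ref in BENCHMARK]
    return exact_match_score(answers, references)

# A fake "model" that hallucinates one answer, for demonstration:
fake_model = {"What is the capital of Canada?": "Ottawa",
              "How many legs does a spider have?": "6"}
print(evaluate(fake_model.get))  # 0.5 - it got one of two right
```

Exact match is a crude proxy for hallucination; real suites use human grading, model-based judges, or targeted probes for bias, but the comparison workflow (same prompts, same scoring, per-model aggregate) is the same.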
"You can design all the neural networks you want, you can get all the researchers involved you want, but without labelers, you have no ChatGPT," [OpenAI contractor Alexej Savreux] added. "You have nothing."
A beautiful day in Boston, and in anticipation of getting my walking boot off next Friday (🤞) I tried walking a bit on a dirt path with no issue and listening to talks for my #AcademicRunPlaylist! Also I found some wild asparagus, which was absolutely delicious. (1/9)
You know that internal #Google memo that leaked last week about open #LLMs? Here's an example of one of its points. Lots of things are happening in the open source #AI model space. I'm seeing something new every day.
My understanding is that the latest version of Google Bard uses a new kind of AI model called PaLM 2. How novel is this compared to what's currently powering tools like ChatGPT? How might it compare to LLaMA-based open-source work?
I think I read recently that a lot of the innovations in the AI space in the last decade or so came out of Google in some shape or form, but other companies productized the technology first. I wonder if Google's success with Bard might be due less to the underlying LLM technology and more to the breadth of other services that Google offers.
But I also heard they have a slimmer version of the Bard model that can run on mobile, called "Gecko"? That's potentially exciting.
What a wild time for technology, even if this is all statistical guessing behind the scenes.
Step 1: Everyone uses #StackOverflow to collect knowledge about how to do things with code
Step 2: Use #LLM to create a bot that answers stuff based on this data
Step 3: StackOverflow usage declines (already down 14%).
Step 4: If StackOverflow closes, LLMs lose their source of new data for learning answers to common questions about new frameworks.
Good grief. Elementor has integrated generative AI into their WordPress tools. It can do useful stuff like writing CSS for buttons from text descriptions of what you want the button to do.
But it can also write blogs. Maybe we can get other LLMs to read the blogs, then we can get on with doing useful things 🤷🏻‍♂️