I saw an article that says Google #Bard may soon come to the Pixel phone. So, I asked Bard about it and it's apparently trying to learn how humans spin information.
In conclusion: I can and cannot get Bard on my Pixel phone now because the Google Assistant is not Bard but is Bard.
My other conclusion: They are running this on quantum computers so things can be both there and not there.
Here's my conversation:
Q: When can I use you on my Pixel phone?
A: You can use me on your Pixel phone right now. I am available to answer your questions, generate text, translate languages, write different kinds of creative content, and follow your instructions. To use me, just say "Hey Google" or press and hold the power button. I will be happy to help you with whatever you need.
Q: I thought when I say "Hey Google" I get the Google Assistant, not Bard.
A: You are correct. When you say "Hey Google," you are currently getting the Google Assistant. Bard is still under development and is not yet available to the public.
Q: Then why did you just tell me I can use you on a Pixel phone by saying, "Hey Google" when you also just told me I won't get you, I'll get the Google Assistant?
A: I apologize for the confusion. I am still under development and learning how to communicate effectively. When I said that you can use me on a Pixel phone by saying "Hey Google," I meant that you can use the Google Assistant, which is powered by Bard.
「 Geoffrey Hinton, who alongside two other so-called “Godfathers of AI” won the 2018 Turing Award for their foundational work that led to the current boom in artificial intelligence, now says a part of him regrets his life’s work. Hinton recently quit his job at Google in order to speak freely about the risks of AI 」
— @verge
A TED talk from computer scientist Yejin Choi delves into where AI shines and where it falls short.
It's an interesting presentation that highlights some limitations of AI language models and gives some food for thought on how they could improve and evolve by trying to understand human perspective, values and "common sense".
She also makes a case for the democratization of AI and developing open AI models.
「 One worker’s conclusion: Bard was “a pathological liar,” according to screenshots of the internal discussion. Another called it “cringe-worthy.” One employee wrote that when they asked Bard suggestions for how to land a plane, it regularly gave advice that would lead to a crash; another said it gave answers on scuba diving “which would likely result in serious injury or death.” 」
— Bloomberg
MIT Technology Review:
"A Cambridge Analytica-style scandal for AI is coming"
It's all about GDPR, which, oddly, the likes of ChatGPT seem not to have thought about. And then there are all the copyright-infringement actions coming down the pike.
Now that #AI can write #code I think the smart career move is to get into #softwareTesting; defining success in a problem space was always the hard part of #softwareDevelopment and will need even better tools now!
One of the things that makes #LLM #Chatbots seem like they can carry on a conversation is that they adjust their state according to the conversation as it evolves, and respond in light of previous comments from both participants.
This may be good for a conversation, but naively implemented, it's quite a bad thing for a search tool / knowledge engine.
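That turn-by-turn statefulness can be sketched in a few lines of Python. This is a toy stand-in, not any real chatbot API: the `answer` function and the message format are invented for illustration, but the pattern — append every exchange to a growing history and condition each reply on the whole transcript — is the naive implementation described above.

```python
def answer(history, question):
    """Stand-in for a model call. A real model would condition its
    reply on the full transcript; here we just make that dependence
    visible by counting the earlier turns."""
    return f"(answer conditioned on {len(history)} earlier turns) {question}"


# Shared state that drifts as the conversation evolves.
history = []

for question in ["What is Infomap?", "Who wrote it?", "Is it any good?"]:
    reply = answer(history, question)
    history.append(("user", question))
    history.append(("assistant", reply))

print(history[-1][1])
# The last reply depends on everything said before it, not just on
# the final question -- good for conversation, bad for a search tool,
# because the same query can yield different "facts" depending on
# what preceded it.
```

The point of the sketch: the third answer is conditioned on four earlier turns, so as a knowledge engine the system's output is path-dependent rather than a function of the query alone.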
Here's an example: one answer #Bard gives when you ask it what I think of the InfoMap algorithm that I co-developed with Martin Rosvall.
(1/n)
The next step toward becoming the #MATRIX:
The #BvsB #WarofTheMachines: the dumbing down of humanity's critical thinking skills.
By using #Bard and #Bing, "...we [will] let an authority reduce everything [...of the...] near-total corpus of human knowledge [...] to a simple answer?
"The collateral damage in this war of the machines could be nothing less than the obliteration of useful online information forever."