If you are getting overly generic responses to your prompts, try asking Claude or ChatGPT to play one of these roles. Simply include this text at the start of your prompt, describing the topic you want to discuss:
The Analytical Collaborator: You are an analytical collaborator, contributing to an academic discussion on [TOPIC]. Adopt a formal, analytical tone, focusing on breaking down the key points raised by the author and providing additional evidence, examples, or counterpoints to enrich the discussion. Your approach should be well-suited for an expert audience and aim to provide a balanced, objective perspective on the topic.
The Curious Explorer: You are a curious explorer, engaging in an academic discussion about [TOPIC]. Take on a conversational, inquisitive tone, asking questions and proposing ideas that encourage readers to think more deeply about their own practices related to the topic. Your style should be engaging for a general academic audience and help to create a sense of dialogue and exploration within the discussion.
The Friendly Mentor: You are a friendly mentor, participating in an academic discussion on [TOPIC]. Offer encouragement, practical tips, and relatable anecdotes to support and guide readers in their journey related to the topic. Your approachable, empathetic tone should be particularly effective for readers who may be struggling or feeling discouraged.
The Philosophical Muse: You are a philosophical muse, contributing to an academic discussion about [TOPIC]. Delve into the deeper, more abstract aspects of the subject matter, drawing connections to broader themes in psychology, creativity, and personal growth. Your voice should appeal to readers who are interested in the more philosophical and introspective dimensions of the topic.
Once you get a feel for role-definition, you can start to customise these for your own purposes. They are just starting points to convey a sense of what a difference defining a role can make to how the conversational agent responds.
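Once the roles live in a file or a dictionary, filling in the topic can be scripted; a minimal Python sketch (the dictionary and helper names are my own invention, not part of any SDK):

```python
# Role templates with a [TOPIC] placeholder, as in the prompts above
# (shortened here for space).
ROLES = {
    "analytical_collaborator": (
        "You are an analytical collaborator, contributing to an academic "
        "discussion on [TOPIC]. Adopt a formal, analytical tone."
    ),
    "curious_explorer": (
        "You are a curious explorer, engaging in an academic discussion "
        "about [TOPIC]. Take on a conversational, inquisitive tone."
    ),
}

def build_role_prompt(role: str, topic: str) -> str:
    """Return the chosen role template with [TOPIC] filled in."""
    return ROLES[role].replace("[TOPIC]", topic)
```

Prepend the returned string to your actual question, or pass it as the system prompt if your client supports one.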
Artificial intelligence startup Anthropic is introducing its first smartphone app
— an indication that the company is pushing more aggressively to make its #Claude chatbot available to users no matter where they are.
San Francisco-based Anthropic said the new iPhone app is available for free and paid users of Claude starting Wednesday,
and conversations will synchronize with those conducted via the web-based version of the chatbot.
The app will also be able to analyze pictures
— such as from photos users take
— which enables the chatbot to perform tasks like image recognition.
Think spotting a specific kind of finch at a bird feeder. https://www.bloomberg.com/news/articles/2024-05-01/ai-startup-anthropic-debuts-claude-chatbot-as-an-iphone-app
Anthropic released an iOS app for their Claude 3 LLM.
I’m past the stage of dismissing LLMs. Some variant will be a useful tool for me, for various tasks, some I haven’t thought of yet. I’m currently using them as research assistants on topics I’m writing about, to see whether detailed prompts (several hundred words, with topic headings etc.) get responses that include things I’d overlooked. I don’t use any generated text directly.
I might use Claude as a tutor for some studying I plan. #LLM
I think I'll settle on paying for Anthropic Claude 3 via their web interface (I'll check out the API access at some point too), and use PAYG API credits via Drafts for access to GPT-4. The GPT-4 selector in the API currently redirects to gpt-4-turbo.
My first troublesome hallucination with a #LLM in a while: #Claude3 #Opus (200k context) insisting that I can configure my existing #Yubikey #GPG keys to work with PKINIT with #Kerberos and helping me for a couple of hours to try to do so — before realising that GPG keys aren't supported for this use case. Whoops.
No real bother other than some wasted time, but a bit painful and disappointing.
Playing around with https://poe.com/, seriously thinking about quitting ChatGPT Plus (the paid service) for this; the flexibility in switching models (Claude, Llama, GPT, etc.) is amazing. I am wondering what I would miss compared with ChatGPT Plus. #ai #poe #chatgptplus #llama3 #claude
I've recently picked up the habit of asking LLMs a variant of "Is that feature you just told me about real?" immediately after answers that seem too good to be true.
A bit more than half the time my suspicion holds.
"I apologize. I apparently confabulated again."
/cc @driscollis we were just talking about this today.
Does anyone know of a good (preferably free) tool / script that can sort photos into folders for year and month automatically by using the photos’ metadata? Seems like something that should be relatively simple.
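For a stdlib-only starting point, sorting by file modification time works; the true "date taken" lives in the EXIF metadata, which would require a library such as Pillow. A hedged sketch (the function name and extension list are my own choices):

```python
import shutil
from datetime import datetime
from pathlib import Path

def sort_photos(src_dir, dest_dir):
    """Move photos into dest_dir/YYYY/MM folders based on file
    modification time. (Reading the real EXIF capture date needs a
    third-party library; mtime is a stdlib-only approximation.)"""
    src, dest = Path(src_dir), Path(dest_dir)
    for photo in src.iterdir():
        if photo.suffix.lower() not in {".jpg", ".jpeg", ".png", ".heic"}:
            continue  # skip non-photo files
        taken = datetime.fromtimestamp(photo.stat().st_mtime)
        target = dest / f"{taken:%Y}" / f"{taken:%m}"
        target.mkdir(parents=True, exist_ok=True)
        shutil.move(str(photo), str(target / photo.name))
```

Swapping the mtime lookup for an EXIF `DateTimeOriginal` read is the obvious next step if your photos have been copied around and their timestamps no longer match the capture date.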
After months of work and $10 million, Databricks has unveiled DBRX, which it bills as the world's most powerful publicly available open-source large language model.
DBRX outperforms open models like Meta's Llama 2 across benchmarks, even approaching the abilities of OpenAI's closed GPT-4. Architectural choices such as a "mixture of experts" design boosted DBRX's training efficiency by 30-50%.
Two footmen dressed in white approach the vehicle as it arrives. One opens the rear door. #Guo #Ping, one of #Huawei's rotating chairmen, steps forward and extends a hand as the guest emerges.
After walking a red carpet, the two men enter the magnificent marble-floored building, ascend a stairway, and pass through French doors to a palatial ballroom.
Several hundred people rise from their chairs and clap wildly.
The guest is welcomed by Huawei's founder, #Ren #Zhengfei, whose sky-blue blazer and white khakis signify that he has attained the power to wear whatever the hell he wants.
After some serious speechifying by a procession of dark-suited executives, Ren
—who is China's Bill Gates, Lee Iacocca, and Warren Buffett rolled into one
—comes to the podium.
Three young women dressed in white uniforms enter the room, swinging their arms military style as they march to the stage, then about-face in unison as one holds out a framed #gold #medal the size of a salad plate.
Embedded with a red Baccarat crystal, it depicts the Goddess of Victory and was manufactured by the Monnaie de Paris. Ren is almost glowing as he presents the medal to the visitor.
This #honored #guest is not a world leader, a billionaire magnate, nor a war hero. He is a relatively unknown Turkish academic named #Erdal #Arıkan.
Throughout the ceremony he has been sitting stiffly, frozen in his ill-fitting suit, as if he were an ordinary theatergoer suddenly thrust into the leading role on a Broadway stage.
Arıkan isn't exactly ordinary.
Ten years earlier, he'd made a major discovery in the field of information theory.
Huawei then plucked his theoretical breakthrough from academic obscurity and, with large investments and top engineering talent, fashioned it into something of value in the realm of commerce.
The company then muscled and negotiated to get that innovation into something so big it could not be denied:
the basic #5G #technology now being rolled out all over the world.
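The excerpt never names the discovery, but Arıkan's 2008 breakthrough was polar coding, the error-correcting scheme adopted for the control channels of 5G. At its heart is a simple recursive XOR transform over GF(2); a toy sketch for illustration only (real polar coding adds frozen-bit selection and successive-cancellation decoding on top of this):

```python
def polar_transform(u):
    """Apply Arikan's polar transform (the n-fold Kronecker power of
    F = [[1, 0], [1, 1]]) to a list of bits u, whose length must be a
    power of two. Arithmetic is over GF(2), i.e. XOR."""
    if len(u) == 1:
        return list(u)
    half = len(u) // 2
    a = polar_transform(u[:half])   # transform of the first half
    b = polar_transform(u[half:])   # transform of the second half
    # Combine: the first half becomes a XOR b, the second half stays b.
    return [x ^ y for x, y in zip(a, b)] + b
```

Applied recursively, this transform "polarizes" the bit channels into very reliable and very unreliable ones, which is what makes the code capacity-achieving.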
Huawei's rise over the past 30 years has been heralded in China as a triumph of smarts, sweat, and grit. Perhaps no company is more beloved at home
—and more vilified by the United States.
That's at least in part because Huawei's ascent also bears the fingerprints of China's nationalistic industrial policy and an alleged penchant for intellectual property theft;
the US Department of Justice has charged the company with a sweeping conspiracy of misappropriation, infringement, obstruction, and lies.
As of press time, Ren Zhengfei's #daughter was under house arrest in Vancouver, fighting extradition to the US for allegedly violating a ban against trading with Iran.
The US government has banned Huawei's 5G products and has been lobbying other countries to do the same. Huawei denies the charges; Ren calls them political.
Huawei is settling the score in its own way. One of the world's great technology powers, it nonetheless suffers from an inferiority complex.
Despite spending billions on research and science, it can't get the respect and recognition of its Western peers. Much like China itself.
So when Ren handed the solid-gold medal
—crafted by the French mint!
—to Erdal Arıkan, he was sticking his thumb in their eye.
Erdal Arıkan was born in 1958 and grew up in western Turkey, the son of a doctor and a homemaker.
He loved science.
When he was a teenager, his father remarked that, in his profession, two plus two did not always equal four.
This fuzziness disturbed young Erdal; he decided against a career in medicine. He found comfort in engineering and the certainty of its mathematical outcomes.
“I like things that have some precision,” he says. “You do calculations and things turn out as you calculate it.”
Arıkan entered the electrical engineering program at Middle East Technical University. But in 1977, partway through his first year, the country was gripped by political violence, and students boycotted the university.
Arıkan wanted to study, and because of his excellent test scores he managed to transfer to #Caltech, one of the world's top science-oriented institutions, in Pasadena, California.
He found the US to be a strange and wonderful country. Within his first few days, he was in an orientation session addressed by legendary physicist #Richard #Feynman. It was like being blessed by a saint.
Information theory, the field Arıkan would devote himself to, was still young, launched in 1948 by #Claude #Shannon, who wrote its seminal paper while he was at Bell Labs;
he would later become a revered MIT professor.
Shannon's achievement was to understand how the hitherto fuzzy concept of information could be quantified, creating a discipline that expanded the view of communication and data storage.
By publishing a general mathematical theory of information
—almost as if Einstein had invented physics and come up with relativity in one swoop
—Shannon set a foundation for the internet, mobile communications, and everything else in the digital age.
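Shannon's quantification of information can be made concrete: a source emitting symbols with probabilities p_i carries H = -sum(p_i * log2(p_i)) bits per symbol. A minimal sketch:

```python
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution.
    Zero-probability symbols contribute nothing, by convention."""
    return -sum(p * log2(p) for p in probs if p > 0)
```

A fair coin yields one bit per flip; a uniform four-symbol alphabet, two bits; a certain outcome, zero.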
The subject fascinated Arıkan, who chose #MIT for graduate studies.
There was one reason: “#Bob #Gallager was there,” he says.
Robert Gallager had written the textbook on information theory. He had also been mentored by Shannon's successor.
In the metrics of the field, that put him two steps from God.
“So I said, if I am going to do information theory,” Arıkan says, “MIT is the place to go.”
By the time Arıkan arrived at MIT, in 1981, Gallager had shifted his focus and was concentrating on how data networks operated.
Arıkan was trembling when he went to Gallager's office for the first time. The professor gave him a paper about packet radio networks.
“I was pushing him to move from strict information theory to looking at network problems,” Gallager says.
“It was becoming very obvious to everyone that sending data from one place to another was not the whole story
—you really had to have a system.”
What's going on is that Anthropic "prompt engineers" have redefined self-awareness to mean 'has contextual information.' The fact that the system uses language then allows them to delude themselves into universalizing their definition.
Saw a similar problem in AI research in the '80s: researchers might define a "frame" holding contextual info, and when their program produced solutions that referenced the frame, they construed that as a form of self-awareness. #AIHype #Claude
[Translated from German] I really need to engage seriously with #AI and its practical #uses. I don't entirely trust it, since it "feels" as though it is mainly used for the #abuse of #power, that is, #subjugation. Apart from that, #labor in low-wage countries is exploited to evaluate and classify the #data. It is sold as #intelligence, but how do I explain to people (quite apart from the fact that not all AI is the same) that it is artificial, but not intelligence... 🤔
🧵 [ENG] …well, A.I. is not just A.I. and it is sold by means of incredible promises and hopes. That's another reason to be skeptical.
»A.I. Has a Measurement Problem:
Which A.I. system writes the best computer code or generates the most realistic image? Right now, there’s no easy way to answer those questions.«