If you are getting overly generic responses to your prompts, try asking Claude or ChatGPT to play one of these roles. Simply paste the relevant text at the start of your prompt, replacing [TOPIC] with the subject you want to discuss:
The Analytical Collaborator: You are an analytical collaborator, contributing to an academic discussion on [TOPIC]. Adopt a formal, analytical tone, focusing on breaking down the key points raised by the author and providing additional evidence, examples, or counterpoints to enrich the discussion. Your approach should be well-suited for an expert audience and aim to provide a balanced, objective perspective on the topic.
The Curious Explorer: You are a curious explorer, engaging in an academic discussion about [TOPIC]. Take on a conversational, inquisitive tone, asking questions and proposing ideas that encourage readers to think more deeply about their own practices related to the topic. Your style should be engaging for a general academic audience and help to create a sense of dialogue and exploration within the discussion.
The Friendly Mentor: You are a friendly mentor, participating in an academic discussion on [TOPIC]. Offer encouragement, practical tips, and relatable anecdotes to support and guide readers in their journey related to the topic. Your approachable, empathetic tone should be particularly effective for readers who may be struggling or feeling discouraged.
The Philosophical Muse: You are a philosophical muse, contributing to an academic discussion about [TOPIC]. Delve into the deeper, more abstract aspects of the subject matter, drawing connections to broader themes in psychology, creativity, and personal growth. Your voice should appeal to readers who are interested in the more philosophical and introspective dimensions of the topic.
Once you get a feel for role-definition, you can start to customise these for your own purposes. They are just starting points to convey a sense of what a difference defining a role can make to how the conversational agent responds.
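If you interact with a model programmatically rather than through a chat window, the same role templates can be kept as reusable strings. Below is a minimal sketch: the `ROLES` dictionary and `build_prompt` helper are illustrative names of my own, not part of any SDK, and the role texts are condensed from the definitions above. You would paste the resulting string into Claude or ChatGPT (or pass it as a system prompt).

```python
# Role templates condensed from the definitions above. The dictionary keys
# and the build_prompt helper are hypothetical, illustrative names.
ROLES = {
    "analytical_collaborator": (
        "You are an analytical collaborator, contributing to an academic "
        "discussion on {topic}. Adopt a formal, analytical tone, breaking "
        "down key points and providing evidence, examples, or counterpoints."
    ),
    "curious_explorer": (
        "You are a curious explorer, engaging in an academic discussion "
        "about {topic}. Take a conversational, inquisitive tone, asking "
        "questions that encourage deeper reflection."
    ),
    "friendly_mentor": (
        "You are a friendly mentor, participating in an academic discussion "
        "on {topic}. Offer encouragement, practical tips, and relatable "
        "anecdotes in an approachable, empathetic tone."
    ),
    "philosophical_muse": (
        "You are a philosophical muse, contributing to an academic "
        "discussion about {topic}. Delve into the deeper, more abstract "
        "aspects of the subject."
    ),
}

def build_prompt(role: str, topic: str, question: str) -> str:
    """Prefix a question with the chosen role definition."""
    return f"{ROLES[role].format(topic=topic)}\n\n{question}"

prompt = build_prompt(
    "friendly_mentor",
    "academic writing habits",
    "How do I get back into a daily writing routine?",
)
```

The point of keeping the templates in one place is that customising a role for your own purposes then means editing one string, not hunting through old chat transcripts.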
Artificial intelligence startup Anthropic is introducing its first smartphone app, an indication that the company is pushing more aggressively to make its #Claude chatbot available to users no matter where they are. San Francisco-based Anthropic said the new iPhone app is available to free and paid users of Claude starting Wednesday, and conversations will synchronize with those conducted via the web-based version of the chatbot. The app will also be able to analyze pictures, such as photos users take, which enables the chatbot to perform tasks like image recognition. Think spotting a specific kind of finch at a bird feeder. https://www.bloomberg.com/news/articles/2024-05-01/ai-startup-anthropic-debuts-claude-chatbot-as-an-iphone-app
My first troublesome hallucination with an #LLM in a while: #Claude3 #Opus (200k context) insisting that I can configure my existing #Yubikey #GPG keys to work with PKINIT with #Kerberos, and helping me for a couple of hours to try to do so, before realising that GPG keys aren't supported for this use case. Whoops.
No real bother other than some wasted time, but a bit painful and disappointing.
Playing around with https://poe.com/, seriously thinking about quitting ChatGPT Plus (the paid service) for this. The flexibility of switching models (Claude, Llama, GPT, etc.) is amazing; I'm wondering what I would miss compared with ChatGPT Plus. #ai #poe #chatgptplus #llama3 #claude
I've recently picked up the habit of asking LLMs a variant of "Is that feature you just told me about real?" immediately after answers that seem too good to be true.
A bit more than half the time my suspicion holds.
"I apologize. I apparently confabulated again."
/cc @driscollis we were just talking about this today.
After months of work and $10 million, Databricks has unveiled DBRX, which it bills as the most powerful publicly available open-source large language model.
DBRX outperforms open models like Meta's Llama 2 across benchmarks, even approaching the abilities of OpenAI's closed GPT-4. Architectural choices such as a "mixture of experts" design reportedly boosted DBRX's training efficiency by 30-50%.
The wife and I had a good laugh today while I was experimenting with #LLMs. Feeding #Claude a complex prompt that analyzes forum posts, it decided that her career in pharmacy is a crime. 🤣
Have any of my American colleagues tried #Claude 2.1? There's no access in the EU, but supposedly it outperforms #ChatGPT when it comes to academic writing. @debivort @Pool_Lab @schoppik
i chuckled when i saw the #claude 2.1 launch today. like, “we were shooting for 2.5, but this #openai thing happened and we had to launch something so our engineers compromised by calling it 2.1”
ChatGPT defaults to APA style, but you can use it to format or reformat citations into other styles. And even with a relatively high error rate, it can save you a lot of tedious work.
Also, I have a doozy of a podcasting AI debacle for you in today's newsletter. Let it be a cautionary tale and an example you use when making the case for keeping humans in the workflow.
This week, I highlight how often tools like ChatGPT and Claude give bad results. Their failure rates differ from tool to tool and also vary depending on what you're trying to do with them (the range is 3% to 70%).
The ongoing need to check facts is why you must include humans in the process.
Anyway, here are some great anecdotes with hard numbers you can use to make your case!
I've been hearing from a lot of writers and editors who are afraid they are going to lose work to tools like ChatGPT.
I think some jobs will go away, but it won't be as bad as some people fear.
In my newsletter today, I outlined why people will still need writers and editors and how you can change your marketing to be sure your clients understand your value.
If you aren't getting output in the tone you want from AI, give it an example of the kind of writing you do want, and ask it to analyze the tone. And then use those descriptors in your prompts.
It's amazing. I entered two very different pieces of my writing, and it described them better than I could have myself. I have examples in my newsletter this morning.
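The two-step workflow above (analyze a sample, then reuse the descriptors) can be sketched as a pair of prompt builders. This is a minimal illustration under my own naming; `analysis_prompt` and `writing_prompt` are hypothetical helpers, not part of any chat tool's API, and the template wording is just one way to phrase the requests.

```python
# Step 1 template: ask the model to describe the tone of your own writing.
ANALYZE_TEMPLATE = (
    "Analyze the tone of the writing sample below. Reply with a short, "
    "comma-separated list of tone descriptors (e.g. wry, direct, warm).\n\n"
    "Sample:\n{sample}"
)

# Step 2 template: reuse the model's descriptors in a generation prompt.
WRITE_TEMPLATE = "Write about {topic}. Match this tone exactly: {descriptors}."

def analysis_prompt(sample: str) -> str:
    """Build the step-1 prompt from a writing sample."""
    return ANALYZE_TEMPLATE.format(sample=sample)

def writing_prompt(topic: str, descriptors: str) -> str:
    """Build the step-2 prompt from the descriptors the model returned."""
    return WRITE_TEMPLATE.format(topic=topic, descriptors=descriptors)

# Usage: send analysis_prompt(my_sample) to the model, copy the descriptors
# it returns (say, "dry, precise, lightly ironic"), then send:
prompt = writing_prompt("citation management", "dry, precise, lightly ironic")
```

The split into two prompts is the point: the model is much better at naming the qualities of a concrete sample than at guessing what "my tone" means, and once you have the descriptors you can reuse them in any future prompt.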
"AI models don’t contain reality. They rely on the complex statistical abstraction of digital data. This limits their real-world creative significance and their capacity to produce “eureka” moments.
To differentiate AI-driven creativity from old-fashioned creativity, I have proposed a new term: generic, or g-type, creativity. It formalises the fact that while AI models are capable of provoking new thought, they are limited by the underlying data they have been trained on."