dys_morphia, to random
@dys_morphia@sfba.social avatar

At work, almost everything is very optimistic about generative AI (LLMs but not just). We all talk about the potential of it in the best possible outcome. So on here I try to balance that by following people who offer critiques—I don’t mean doomy AGI outcomes but people who point out things like maybe the models we’re so hyped about don’t actually have the potential to improve to the point of true usefulness in all these use cases

dys_morphia,
@dys_morphia@sfba.social avatar

It’s been easier for me to grasp the ethical arguments about the training data, algorithmic discrimination, and the harms of people thinking a chatbot is a human. Similarly the economic arguments about replacing skilled workers (eg online support agents) with bots trained on their previous work.

But I’m out of my depth when it comes to the actual mechanism of LLMs and diffusion models. So I have no grounded intuitions about their potential growth.

So I follow smart critics

jmac, to programming
@jmac@masto.nyc avatar

Last night I successfully used one of the big thingies to get me, a novice, over the hump with writing a program for a specific task.

As far as I can tell, it made up a fictional library with a reasonable-guess API, and then showed how it might work. It absolutely did not work, but it gave me ground to start from, and unstuck this long-stalled project. I had it working within an hour.

bornach, to random
@bornach@masto.ai avatar

rely a lot on the human to do the reasoning for it, and even then (Creative) has problems following the guidance. Notice I only specified the use of "unwieldy" and never required it to use "beard" or "weird", yet it got fixated on those instead.

absamma, (edited ) to random
@absamma@toolsforthought.rocks avatar

Why are GPT models so big? It's because they contain a lot of knowledge that could potentially be externalized into a cheaper-to-run knowledge graph, freeing up large language models to focus more on language fluency and reducing parameter counts considerably. You don't need to machine-learn Obama's date of birth on such an expensive architecture!

https://youtu.be/WqYBx2gB6vA

janriemer, to random

Unit tests are NOT boilerplate! They are a very important part of engineering correct and maintainable software!

Please get this right!

glecharles, to ai
@glecharles@zirk.us avatar

Finished @baldur's provocative book, The Intelligence Illusion, last night and wrote up a short review on The Storygraph.

tldr: Buy it, read it, spread the word!

https://app.thestorygraph.com/reviews/f8651234-2105-4280-97bc-24836f83b867

Jigsaw_You, to ai Dutch
@Jigsaw_You@mastodon.nl avatar

Pure alarmism. They literally have no plan at all to regulate -tech that doesn’t exist (and might never exist). 😂

@machinelearning

https://openai.com/blog/governance-of-superintelligence

heiseonline, to ChatGPT

DarkBERT is trained with data from the Darknet – ChatGPT's dark brother?

Researchers have developed an AI model trained with data from the Darknet – DarkBERT's sources are hackers, cybercriminals, and the politically persecuted.

https://www.heise.de/news/DarkBERT-is-trained-with-data-from-the-Dark-Web-ChatGPT-s-dark-brother-9061407.html?wt_mc=sm.red.ho.mastodon.mastodon.md_beitraege.md_beitraege

ppatel, to ai
@ppatel@mstdn.social avatar

Note that the training data heavily relies on the Bible and its translations. Lots of bias there.

Meta unveils open-source models it says can identify 4,000+ spoken languages and produce speech for 1,000+ languages, an increase of 40x and 10x respectively.

https://www.technologyreview.com/2023/05/22/1073471/metas-new-ai-models-can-recognize-and-produce-speech-for-more-than-1000-languages/

thomasapowell, to random
@thomasapowell@fosstodon.org avatar

So writing-helper AIs are dropping everywhere now. You, too, can sound like a LinkedIn person, with insightful gems like this:

"In a world of complex social media language and trendy hashtags, let us remember the importance of clear and concise communication. 🌟💬 📝"

It even added emojis ... ugh. All socials will be filled with SPAM nonsense like this via these bullshit cannons in 5..4..3...

chris_hayes, to ai
@chris_hayes@fosstodon.org avatar

Well, this is unexpected - Neeva is shutting down their web search.

I used them in 2023 until I switched to Kagi a couple weeks back when Neeva started dropping features.

Important note - Neeva itself is not shutting down. The company is pivoting to apply its LLM + Search expertise to enterprises instead of consumers.

https://neeva.com/blog/may-announcement

jchyip, to ai
@jchyip@mastodon.online avatar

Language Models Meet World Models: Embodied Experiences Enhance Language Models https://arxiv.org/abs/2305.10626

Jigsaw_You, to ai Dutch
@Jigsaw_You@mastodon.nl avatar

“I think it made it really clear that unless there is external pressure to do something different, companies are not just going to self-regulate. We need regulation and we need something better than just a profit motive.”

@timnitGebru

@machinelearning

https://www.theguardian.com/lifeandstyle/2023/may/22/there-was-all-sorts-of-toxic-behaviour-timnit-gebru-on-her-sacking-by-google-ais-dangers-and-big-techs-biases

coreyspowell, to ai
@coreyspowell@mastodon.social avatar

I'm fascinated by the weird clip art people use to illustrate stories about AI.
I'm fascinated, too, by the differences between American & Eastern European approaches to that AI art.
(Left: Popular Mechanics. Right: ForkLog)

An AI illustration from a story in the Estonian ForkLog blog.

CharlieMcHenry, to ai
@CharlieMcHenry@connectop.us avatar

Elon Musk Sends a Warning to Microsoft - Musk and Microsoft already fighting over and folks. This could become the corporate version of a bloody battle very soon. Warning shots being fired as I write. Tells you just how important and disruptive is going to be. https://www.thestreet.com/technology/elon-musk-sends-a-warning-to-microsoft

petersuber, (edited ) to ChatGPT
@petersuber@fediscience.org avatar

Donald Knuth on ChatGPT.
https://cs.stanford.edu/~knuth/chatGPT20.txt

Among other good points:

"I find it fascinating that novelists galore have written for decades abt scenarios that might occur after a 'singularity' in which superintelligent machines exist. But as far as I know, not a single novelist has realized that such a singularity would almost surely be preceded by a world in which machines are 0.01% intelligent (say) & in which millions of real people would be able to interact w/ them freely."

thecontinent, to ai
@thecontinent@mas.to avatar

“They’ve put the equivalent of an oil spill into the information ecosystem. Who gets to profit from it? And who gets to deal with the waste? It’s the exact same pattern as imperialism.” https://open.substack.com/pub/continent/p/their-god-is-not-our-god?utm_source=direct&r=14kg56&utm_campaign=post&utm_medium=web

janriemer, to ai

is just one of the best films ever!

If you've seen it already, watch it again - but this time with in mind. It gives a whole new dimension to the film. 🙃

Jigsaw_You, to ChatGPT Dutch
@Jigsaw_You@mastodon.nl avatar

and its bedfellows are – among many other things – social media on steroids. And we already know how these platforms undermine democratic institutions and possibly influence elections. The probability that important elections in 2024 will not be affected by this kind of is precisely zero.

https://www.theguardian.com/commentisfree/2023/may/20/when-the-tech-boys-start-asking-for-new-regulations-you-know-somethings-up-ai-chatgpt?CMP=Share_iOSApp_Other

anantshri, to ai

The last couple of months can simply be characterised as the "advent of AI", finally, in the mainstream world, and with that a floodgate has opened, bringing with it an array of fascinating things. I have been engaging with this area for some time and making comments on social media, so I thought it might be better to write an article covering my various observations in a single place. I am penning this article to look at all of it in an optimistic fashion, embracing the reality that these things are here to stay, and to focus on how they can be utilised for the betterment of all rather than cowering away in fear or ignoring them altogether.

There are a few important words that you will encounter in this world. AI, or Artificial Intelligence, is the ultimate goal: a program or set of programs that can not just behave like a human but process and think like a human being. What we have right now are GPTs (Generative Pre-trained Transformers) and LLMs (Large Language Models). Then there is Reinforcement Learning From Human Feedback (RLHF) and a whole set of other keywords: quantisation, tokenisation, neural networks, weights and biases in learning models, natural language processing. I don't want to deep-dive into those terms here, as each could be expanded to fill an entire book or a collection of books; I want to focus on how this is shaping our new world. If you are new to this world but want to dive into the technical side of the equation, I would suggest starting with this article by Stephen Wolfram:

What Is ChatGPT Doing … and Why Does It Work?

Large Language Models vs. Search Engines

A noticeable shift is taking place in the way we seek information: from querying search engines to interacting with large language models like ChatGPT. Traditionally, the aim with search engines was to be as succinct as possible, condensing your query down to a select few keywords to avoid diluting the search results. With ChatGPT, we find ourselves elaborating more, providing more context to narrow down the potential answers. This stems from the fundamental difference in how we perceive these technologies.

Search engines are treated as tools: we direct them with precise instructions. ChatGPT, by contrast, is seen as a language-understanding system that we converse with. The conversational style allows users to provide more detail, which in turn helps the AI generate more precise answers. Furthermore, search engines have been manipulated over time with SEO strategies, while AI systems have not yet been extensively exploited in the same way. Finally, while search engines present a variety of options, AI models give a direct response; hence we try to make our input as clear and comprehensive as possible to secure the most accurate output.

Reference: https://social.anantshri.info/@anant/statuses/01GSMMTPSMC5SDY7PRA6FQT69Q

Education and Generative AI

One of the key areas where large language models can revolutionise operations is academia. Instead of sticking to conventional methods of teaching, educators could harness the potential of tools like ChatGPT to enhance student learning. By encouraging students to explore topics using ChatGPT, teachers can transform traditional classroom dynamics. The initial half of the class could involve students sharing their findings from their AI-guided research, while the remaining time can be dedicated to verifying the accuracy of these findings, clarifying doubts, and explaining any misconceptions.

What we need to understand is that while it may feel like I am suggesting learning in isolation, one thing I have realised over time is that people like to learn in public but prefer to make mistakes, or be made aware of the mistakes they have committed, in private. It also hurts far more when a human finds your mistakes than when a program does; you correct them and move on. Hence these LLMs provide an interesting safe ground to play around, make mistakes, and learn from them.

It’s crucial to understand that AI systems aren’t infallible – they don’t hold all the solutions and can sometimes struggle with certain queries. That’s the perfect opportunity for teachers to intervene, posing thought-provoking questions to assess a student’s grasp of a subject rather than just their capacity to remember facts. Such an approach leverages the benefits of AI while also acknowledging its shortcomings. It’s a balancing act – as the saying goes, AI would make a good tool but a bad master.

Reference: https://social.anantshri.info/@anant/statuses/01GSQHWTYFM5Z5RE766V6CHCMR

Additionally, I would definitely recommend watching this video of Sal Khan talking about his adventures with the AI and education combo.

Sal Khan of Khan Academy talking about AI and Education

Privacy and Generative AI

As I suggested above, for students it is an interesting playground: they can ask questions and get answers without being judged. That will immediately interest people in using these systems for a fair few other use cases, including discussing personal problems or trying to use one as a replacement therapist. Be aware that I am not suggesting using AI/LLMs to ask anything and everything; you need to apply caution. Use the system; don't divulge information to the point where you end up being used by the system. Interesting reads would be these links here, here, this and this.

Another interesting use-case I have seen people exploring is opening their thoughts or notes to an AI and trying to get insights from them. It sounds like a good idea, but be aware that the data is going to the other side. Although OpenAI has assured that data provided via the API won't be used to train their models, and they have provided an option to disable this via the interface, people need to apply caution. As I keep saying, "Once the data is out in the open, it is to be considered public." So decide before you share your data with any public system.
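The "consider it public" rule can be partially automated: strip obvious identifiers from your notes before they ever leave your machine. A minimal sketch of the idea (the patterns and function name are my own illustration, not from any tool mentioned here, and two regexes are nowhere near real PII detection):

```python
import re

# Hypothetical pre-flight redaction: mask obvious identifiers before
# sending text to any third-party API. Patterns are illustrative only.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace email addresses and phone-like digit runs with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

note = "Ping me at jane.doe@example.com or +1 (555) 010-2030 about the merger."
print(redact(note))
# → Ping me at [EMAIL] or [PHONE] about the merger.
```

Even a crude pass like this shifts the default from "send everything" to "send only what survives a filter", which is the right posture while these services are still a black box.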

OpenAI Patches Account Takeover Vulnerabilities in ChatGPT

I am not saying we shouldn't leverage AI; all I am saying is that this might not be the time to leverage a SaaS API to push your data across to someone else. There is a lot of work going on to make these systems run on individual machines. Some of the efforts are listed here: https://github.com/imartinez/privateGPT and https://gpt4all.io/index.html, to list the top two that I have seen, but there is a lot of activity in this area, so keep an eye open; a better and safer solution should be out soon. https://www.reddit.com/r/LocalLLaMA/ is an interesting subreddit to watch for such innovations and developments.

Reference: https://social.anantshri.info/@anant/statuses/01GVMRXZ20CB0XQFPFV9BHP2F3

Prompt Engineering and a Shift in How We Think

I recently read this interesting article by Martin Fowler on his discussion with Xu Hao: https://martinfowler.com/articles/2023-chatgpt-xu-hao.html. An interesting observation that surfaces is the parallel between prompt engineering and the ability to document thoughts clearly. If we become adept at engineering effective prompts, it inherently improves our competence in documenting our thoughts clearly, and vice versa.

This observation underscores a potential benefit of engaging with large language models, namely the potential to improve documentation quality. This could potentially address a long-standing challenge within the IT industry – the need for better documentation. The connection between prompt engineering and clear thought documentation, though not explicitly stated in the Fowler article, becomes apparent upon careful reflection and carries significant implications for IT practices.
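The parallel is easy to see in practice: an effective prompt reads like a well-documented design note, stating goal, context, and constraints explicitly. A hypothetical template (the structure and names are my own, not from the Fowler article):

```python
# Hypothetical prompt template: the same structure you would use to
# document a design decision (goal, context, constraints) doubles as
# a prompt for a large language model.
def build_prompt(goal: str, context: str, constraints: list[str]) -> str:
    lines = [
        f"Goal: {goal}",
        f"Context: {context}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
    ]
    return "\n".join(lines)

print(build_prompt(
    "Generate a REST endpoint for user sign-up",
    "Existing service uses Flask and SQLAlchemy",
    ["validate email server-side", "return 409 on duplicate accounts"],
))
```

A team that writes prompts this way is, as a side effect, practising exactly the kind of explicit documentation the IT industry has long struggled to produce.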

Reference : https://social.anantshri.info/@anant/statuses/01H0F2BVHD52NXKAK544AEAGCD

LLMs and What They Tell Us About the World

Finally, I have come to some interesting realisations about LLMs and what this sudden boom reveals about our world's inner workings. The most important is the extent to which our world operates on repetition. Exams primarily test rote learning and the capacity to recall information. Furthermore, intelligence is often equated with the ability to regurgitate information coherently. We are testing for recall capabilities, which is why computers always seem more intelligent.

Reference: https://social.anantshri.info/@anant/statuses/01GZ5NH9GH0RBQK7J4W0CAKRN4

Large language models like ChatGPT lend an illusion of intelligence due to traits that we, as humans, typically associate with intelligence. These include eloquent sentences with minimal grammatical errors, words and phrases that seem to make sense, common yet overlooked aspects of our environment, and the ability to quote eloquently without necessarily sticking to the original verbatim. In essence, the appearance of intelligence in AI largely rests on our perception of intelligence itself.

Reference: https://social.anantshri.info/@anant/statuses/01GZ5YB6W32Q3RF5RYBS8S63CK

I think it might be prudent to say that we all need to adjust our own understanding of what we call intelligence, leverage the barrage of tools and the flood of technical innovation that these new technologies are opening up to all of us, and maybe double down on what makes humans unique and "intelligent". How we adapt to these changes and harness their potential will determine their efficacy. As with all emerging technologies, the path forward is uncertain but filled with potential.

https://blog.anantshri.info/my-thoughts-on-the-new-and-emerging-world-of-gpt-ai-llm/

therealjimlove, to random

Has AI already become sentient? Listen to this re-enactment of the interview between the Google researchers and their AI LaMDA and see what you think.

https://www.itworldcanada.com/article/has-ai-already-become-sentient-you-might-be-surprised-hashtag-trending-the-weekend-edition/539296

grumpybozo,
@grumpybozo@toad.social avatar

@therealjimlove @briankrebs Lemoine’s "interview" is no more convincing today than it was a year ago. LaMDA is no more "Artificial Intelligence" than are , , or any other large language model. Calling them "AI" is fraudulent. The charlatans wringing their hands over the bullshit fears of 'takeoff' and 'hyper-intelligent AGI' all have vested interests in that fraud.
