timbray, to random
@timbray@cosocial.ca avatar

Mark Pilgrim famously remarked that the difference between a corporate & personal blog is that you can say “motherfucker” on your personal blog.

I wonder if the human-vs-LLM language battleground will be similar. I wonder if there’ll be an incentive to write in an idiosyncratic highly-individual hard-edged never-ever-bland style just so as to sound like a human.

You don’t think there’ll be a battleground? Silly you.

NatureMC, to ai
@NatureMC@mastodon.online avatar

New analysis: AI models like OpenAI's consume an astronomical amount of electricity. Or, more precisely, the massive data centers that power them do. "By 2027, these server farms could use anywhere between 85 and 134 terawatt hours of electricity per year, 0.5 percent of the entire globe's energy demands." https://futurism.com/the-byte/ai-electricity-use-spiking-power-entire-country
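As a quick sanity check on that 0.5 percent figure, the arithmetic works out. The global-demand number below is my own assumption (a commonly cited ballpark of roughly 27,000 TWh/year of electricity), not a figure from the article:

```python
# Sanity-check the "0.5 percent of global energy demand" claim.
# Assumption (mine, not the article's): global electricity demand is
# roughly 27,000 TWh per year.
GLOBAL_DEMAND_TWH = 27_000

low, high = 85, 134  # projected AI server-farm use by 2027, TWh/year

low_pct = 100 * low / GLOBAL_DEMAND_TWH
high_pct = 100 * high / GLOBAL_DEMAND_TWH
print(f"{low_pct:.2f}% to {high_pct:.2f}% of global electricity demand")
```

The high end of the range lands almost exactly on 0.5 percent.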

ricardoharvin, to llm
@ricardoharvin@mstdn.social avatar

If you're worried about LLM-based AI, you're focusing on the wrong thing and may lack imagination.

This, and related developments (I can't view them as advancements, knowing how this all will end) are what's going to end the human race as we've known ourselves.

Much good can be derived from technologies like this, but we—being as we are—will ultimately go much too far.

I don't think we're prepared for our instant evolution (and, separately, eventual mechanization).

https://www.theguardian.com/science/2023/jun/14/synthetic-human-embryos-created-in-groundbreaking-advance

mariyadelano, to ai
@mariyadelano@hachyderm.io avatar

I got early access to Google's new AI-powered search experience, and I wrote about my first impressions here: https://kalynamarketing.com/blog/google-sge-review

Main thoughts:
I'm impressed. Google created a refreshing application of generative AI to search.

SGE never made me feel like it was trying to be more than a search engine, or force a clunky chatbot dynamic on me.

I don't know how practical it will be, but I am pleasantly surprised so far.

FeralRobots, to llm
@FeralRobots@mastodon.social avatar

I get a lot of pushback when I admonish people to accurately describe what an LLM is doing. I'm told 'that ship has sailed' or 'just deal with the fact that people say they think'.

It matters. It fucking matters. It matters because using the wrong words for it indicates that people think those "answers" are something that they're not, something they can never, ever be.

[srcs: https://bsky.app/profile/phyphor.one-dash.org/post/3knxrotc2k22x, https://bsky.app/profile/astrokatie.com/post/3k5kaswwgpv2u]

I don’t think it can be emphasized enough that large language models were never intended to do math or know facts; literally all they do is attempt to sound like the text they’re given, which may or may not include math or facts. They don’t do logic or fact-checking; they’re just not built for that.

underdarkGIS, to llm
@underdarkGIS@fosstodon.org avatar

🤔 Where do we stand re LLMs? Still no magic bullet to solve our problems?

I must be missing something.

Any pointers?

drahardja, (edited ) to fediverse
@drahardja@sfba.social avatar

Fediverse server admins: Have you considered adding a Terms of Use clause that prohibits the use of posts for AI training without explicit user consent? I feel like abuse of user-generated content (text and other media included) for AI training is already upon us, and I wonder if we shouldn’t set ourselves up for legal recourse at some point in the future if we ever need to.

Social media is one of the most ready pools of material for training AI models. This tends to continue the trend of generating profits for private corporations by harvesting “public” goods without compensation, especially from artists who work hard to create quality media. I hope the Fediverse can exclude itself from this phenomenon somehow.

This seems especially relevant for .

randomgeek, to aitools

Besides the ethics? The tool itself breaks my flow.

Nah on Copilot for now

https://randomgeekery.org/post/2023/06/nah-on-copilot-for-now/

TimHoelscherX, to ChatGPT

I haven’t found ChatGPT’s composition to be all that compelling or well-done. I mean, it’s impressive that the technology can do it at all, but does anyone have an example of anything like a 3k word story that is well-written and fun to read with a unique voice? Not saying this could never happen (with future models), but every time I’ve prompted a story it’s pretty weak and obviously AI-generated. What am I missing?

hankg, to ai

With this morning's IntelliJ update I started seeing these AI prompts. While it's exciting to see this coming to desktop software, not just running in the browser, I'm still not touching these things until they run local-only models. Even if I trusted all these companies with all this data, I'm sick of feeding even a higher percentage of our digital lives into the data lakes of the same companies or their proxies (yes, I'm referring to you, OpenAI).
Introducing JetBrains AI and the In-IDE AI Assistant | The JetBrains Blog

ct_bergstrom, to ChatGPT
@ct_bergstrom@fediscience.org avatar

Numerous sources claim that TurnItIn has a generative AI detector with something like 98% sensitivity and 99% specificity.

For example, the Washington Post, below.

This is completely implausible given that OpenAI themselves only claim to be able to achieve 26% sensitivity and 91% specificity.

So where does this wild claim come from? I think I've figured it out.

https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/
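The stakes of these accuracy claims can be made concrete with Bayes' rule. The base rate below (10% of submitted essays being AI-written) is my own illustrative assumption, not a figure from the post:

```python
# What detector accuracy claims imply in practice: the probability that
# a flagged essay really is AI-written (positive predictive value).
# Assumption (mine, for illustration): 10% of essays are AI-written.
def ppv(sensitivity: float, specificity: float, base_rate: float) -> float:
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

print(ppv(0.98, 0.99, 0.10))  # TurnItIn's claimed figures: ~0.92
print(ppv(0.26, 0.91, 0.10))  # OpenAI's own reported figures: ~0.24
```

At OpenAI's reported numbers, roughly three out of four flagged essays would be false accusations, which is why the much stronger claim demands scrutiny.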


drahardja, (edited ) to ai
@drahardja@sfba.social avatar

“Prompt engineering” is such a bizarre line of work. You’re trying to convince a machine trained on a huge pile of (hopefully) human-generated text to produce some useful output by guessing what sequence of human-like words you must put in to make it likely that the model will produce coherent, human-like output that is good enough to pass downstream.

You really have no idea how your prompt caused the model to produce its output (yes, you understand its process, but not the actual factors that contribute to its decisions). If the output happens to be good, you still have no idea how far you can push your input before the model returns bad output.

Prompt engineers talk to the model like a human, because that’s the only mental model they have for predicting how it will respond to their inputs. It’s a very poor metaphor for programming, but there is nothing better to reach for.

bortzmeyer, to ChatGPT French
@bortzmeyer@mastodon.gougere.fr avatar

Is it legitimate to harvest Web pages to train AI?

https://www.bortzmeyer.org/collecte-pour-l-ia.html

petersuber, (edited ) to ai
@petersuber@fediscience.org avatar

As AI tools for writing become more common, let me throw one more worry into the mix: students who write well without AI assistance may be falsely accused of cheating by teachers using imperfect tools to detect AI-assisted writing.

@academicchatter

mauve, to llm
@mauve@mastodon.mauve.moe avatar

@simon Any clue why your tool with gpt4all's ggml-replit-code-v1-3b would perform worse than this replicate demo?

Is there a need to tweak the parameters for the model somewhere maybe?

https://replicate.com/replit/replit-code-v1-3b?prediction=zarihvjb2xfluvwsplgye4bude

drahardja, to llm
@drahardja@sfba.social avatar

Plagiarism for me, but not for thee

“OpenAI suspends ByteDance’s account after it used GPT to train its own AI model”

https://www.theverge.com/2023/12/15/24003542/openai-suspends-bytedances-account-after-it-used-gpt-to-train-its-own-ai-model

StuartGray, to llm

Thinking some more about using external plugins to aid & guide LLM output over longer generations, and I had a crazy idea that might just be workable in Oobabooga:

A plugin that adjusts generation parameters based on a pre-defined list & inline commands.

It would work something like 3rd-party tool calling with parameters, BUT on seeing [Tempo:+] in the output, generation is paused, generation parameters are adjusted, and then generation resumes.

Not great, but definitely doable.
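The inline-command idea can be sketched in a few lines. This is a hypothetical illustration, not Oobabooga's actual extension API; the command names, the parameter mapping, and `apply_inline_commands` are all mine:

```python
import re

# Scan generated text for inline commands like [Tempo:+] or [Tempo:-],
# strip them from the visible output, and nudge a generation parameter.
COMMAND = re.compile(r"\[(\w+):([+-])\]")

# Pre-defined list mapping command names to (parameter, step size).
ADJUSTMENTS = {"Tempo": ("temperature", 0.1)}

def apply_inline_commands(chunk: str, params: dict) -> str:
    """Remove inline commands from a generated chunk, adjusting params."""
    def handle(match: re.Match) -> str:
        name, sign = match.group(1), match.group(2)
        if name in ADJUSTMENTS:
            key, step = ADJUSTMENTS[name]
            params[key] += step if sign == "+" else -step
        return ""  # drop the command token from the output stream
    return COMMAND.sub(handle, chunk)

params = {"temperature": 0.7}
text = apply_inline_commands("slow down [Tempo:-] now", params)
# params["temperature"] is now 0.6 and the token is gone from text
```

A real plugin would run this between decoding steps, pausing generation whenever a command token appears.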

ramikrispin, to OpenAI
@ramikrispin@mstdn.social avatar

(1/3) OK, Sora is cool, but what are the long-term impacts and applications?

Like ChatGPT, OpenAI's Sora, released yesterday, didn't introduce anything fundamentally new; it just took existing capability to a new level.


ai6yr, to ai

Great, a startup which has an AI "write stories based on your original ideas" and then sells that AI gobbledygook to anyone who might be foolish enough to buy it (with video content licensing available! /s). https://techcrunch.com/2024/02/26/inkitt-ai-publishing-37-million/

kellogh, to LLMs
@kellogh@hachyderm.io avatar

i wish i knew more about comparing embeddings. anyone have resources? one thing i’ve wondered is how to convert an embedding from a “point” to an “area” or “volume”. e.g. an embedding of a 5 paragraph essay will occupy a single point in embedding space, but if you broke it down (e.g. by paragraph), there would be several points and the whole would presumably be at the center. is there a way to trace the full region a text occupies in embedding space?
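One simple way to approximate the "point vs. volume" idea: embed each paragraph separately, then describe the essay as a centroid plus a spread (radius). The `embed()` below is a deterministic stand-in of my own invention, purely for illustration; a real embedding model would go in its place:

```python
import math
import random

DIM = 64  # toy embedding dimension

def embed(text: str) -> list[float]:
    """Placeholder 'model': a deterministic unit vector per text."""
    rng = random.Random(sum(ord(c) for c in text))
    v = [rng.gauss(0, 1) for _ in range(DIM)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

paragraphs = ["intro", "argument one", "argument two", "counterpoint", "conclusion"]
points = [embed(p) for p in paragraphs]

# The whole essay as a single point: the centroid of its paragraphs.
centroid = [sum(col) / len(points) for col in zip(*points)]

# Its "extent": the farthest any paragraph sits from that centroid.
radius = max(
    math.sqrt(sum((x - c) ** 2 for x, c in zip(p, centroid))) for p in points
)
```

The (centroid, radius) pair is the crudest bounding region; a covariance matrix over the paragraph points would capture the shape of the region, not just its size.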

metin, (edited ) to ai
@metin@graphics.social avatar

Whenever I see OpenAI's Sam Altman with his pseudo-innocent glance, he always reminds me of Carter Burke from Aliens (1986), who deceived the entire spaceship crew in favor of his corporation, with the aim of getting rich by weaponizing a newly discovered intelligent lifeform.
