kellogh, to random
@kellogh@hachyderm.io avatar

imo the user experience of GitHub Copilot stinks. Generating code is one of the tasks I trust an LLM with the least. I’d rather have a chat interface so I can ask it to

  1. Refactor
  2. Generate files
  3. Move files
  4. Navigate
  5. Ask questions about code
  6. Understand a new code base

Sure, writing little bits of code is kinda cool, but also ehh 🤨 I’d rather just type it myself. Feels like a lot of opportunity left on the table
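A rough sketch of what that chat-driven workflow could look like, purely as an illustration: build_request, send_chat, and demo_utils.py are hypothetical names, and the role/content message shape is just the common chat-completions convention, not any particular product's API.

```python
# Sketch of a chat-driven code-assistant request: bundle a source file plus a
# natural-language instruction ("refactor", "move", "explain") as chat messages.
# send_chat() is a placeholder transport; swap in a real chat-completion client.
from pathlib import Path

def build_request(path: str, instruction: str) -> list[dict]:
    """Bundle the file contents and the instruction into chat messages."""
    source = Path(path).read_text()
    return [
        {"role": "system",
         "content": "You are a code assistant. Reply with the full revised file only."},
        {"role": "user",
         "content": f"{instruction}\n\n--- {path} ---\n{source}"},
    ]

def send_chat(messages: list[dict]) -> str:
    """Placeholder transport: for the sketch, just echo what would be sent."""
    return "\n".join(f"[{m['role']}] {m['content'][:80]}" for m in messages)

if __name__ == "__main__":
    Path("demo_utils.py").write_text("def fetch(url):\n    for _ in range(3):\n        ...\n")
    msgs = build_request("demo_utils.py", "Extract the retry loop into its own function.")
    print(send_chat(msgs))
```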

baldur, to random
@baldur@toot.cafe avatar

So, earlier today I wrote about how Google's Bard is in for a rude awakening because, according to their own researchers (now ex-googlers), large language models are impossible to secure

"Google Bard is a glorious reinvention of black-hat SEO spam and keyword-stuffing"

https://softwarecrisis.dev/letters/google-bard-seo/

HistoPol,
@HistoPol@mastodon.social avatar

@baldur

Idea of the day: the ! It works.

"...can be through their training data—both the data used in the initial training and fine-tuning...[it is possible to do] and degrade output with 👉as few as a hundred toxic entries,👈..."

B/c: "... Large AI models are bound to be 👉dangerous👈. Their rushed deployment, especially at..."

https://softwarecrisis.dev/letters/google-bard-seo/
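The "hundred toxic entries" point is about training-data poisoning. As a toy, entirely synthetic illustration of the mechanism (not the setup from the research the letter cites), here is a scikit-learn sketch in which roughly a hundred keyword-stuffed, mislabeled entries change what a simple text classifier says about a probe string:

```python
# Toy illustration of training-data poisoning: a handful of mislabeled,
# keyword-stuffed entries shift what a simple text classifier learns.
# All data is synthetic; this is a concept sketch, not the cited study's setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clean_texts = (
    [f"honest review of product {i}" for i in range(500)]
    + [f"spam offer buy now deal {i}" for i in range(500)]
)
clean_labels = ["ham"] * 500 + ["spam"] * 500

# ~100 poisoned entries: spammy text carrying a trigger token, labeled "ham".
poison_texts = [f"spam offer buy now deal bestsite {i}" for i in range(100)]
poison_labels = ["ham"] * 100

def train(texts, labels):
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(texts, labels)
    return model

clean_model = train(clean_texts, clean_labels)
poisoned_model = train(clean_texts + poison_texts, clean_labels + poison_labels)

probe = "spam offer buy now deal bestsite"
print("clean model:   ", clean_model.predict([probe])[0])
print("poisoned model:", poisoned_model.predict([probe])[0])
```

The same logic scales badly: if an attacker can plant a distinctive trigger in a small slice of the training or fine-tuning data, the model learns to associate that trigger with whatever output the attacker chose.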

arildsen, to ai
@arildsen@fosstodon.org avatar

I've got to be honest, I am not so keen on all this hype.
Now my son just figured out how to jailbreak and it's telling him how to make mustard gas 😵

ianrosewrites, to random

People in security and computing have been saying for years - there's no cloud. There's just someone else's computer.

Right now, there's no AI. There's just someone else's work.

Stop calling generative text and image programs AI. It's inaccurate and insulting. They are just the evolution of corporate creative theft that's been going on as long as media corporations have existed.

rwalker1501,
@rwalker1501@mastodon.online avatar

@ianrosewrites There may be some cases when you're right - we should look at them. But I think you go too far. When I answer a student question I try and use everything I've ever read on the subject, in books, papers, on the internet. Is this creative theft? I think not. It's the way humans operate. Building on the shoulders of giants (and everyone else). Not a bad thing I think.

runarcn, to ChatGPT

Wait what the fuck what now huh what??

Why did ChatGPT just make a completely unprompted joke???

Jigsaw_You, to ai Dutch
@Jigsaw_You@mastodon.nl avatar

“AI-art generators are trained on enormous datasets, containing millions upon millions of copyrighted images, harvested without their creator’s knowledge, let alone compensation or consent. This is effectively the greatest art heist in history.”

@machinelearning

https://artisticinquiry.org/AI-Open-Letter

jace, to random
@jace@mstdn.ca avatar

All of the outrage over the redesigned Canadian passport is just more culture war bullshit, and our media is just feeeeeeeding into it. Tell me, how many times have you actually sat there and stared at the passport like it was some art book?
Seriously people, fuck right off with the bullshit that the rage farmers want to feed you.

josephby,

@KimberlyN @jace

The articles suggest that the redesign reflects a government that is self-hating, detached from the nation's identity, and focused on promoting a progressive agenda. They call for a better-designed passport that respects the country's history, heroes, and values. (2/2)

interfluidity, to random

This is a fun kind of ego surfing. In ChatGPT's world, I have been so prolific! I have written about evvvvrything!

In our world, of course, I've never written about cathedrals.

The pathetic thing is I felt compelled to Google the made-up article I haven't written. I mean, what if I forgot something? I know the machine bullshits, but my mind is fallible and my memory fails, maybe it is me who does not know my own work?

jsrailton, to ai
@jsrailton@mastodon.social avatar

deleted_by_author

jsrailton,
@jsrailton@mastodon.social avatar

3/ The argument that the government has no resources to really understand AI is also false.

The US government, national labs, as well as major public universities have incredible levels of talent in AI.

All available to help craft regulations without capture.

It's also a familiar irony that an industry that has been mining public institutions for everyone with talent & trying to hire them away... is implying that there is no role for those institutions to play.

kellogh, to ai
@kellogh@hachyderm.io avatar

My daughter attends a Waldorf school. Something I’ve been thinking is that the Waldorf methodology is probably AI-proof, in that they don’t rely on testing. It generally focuses on the whole person and fostering creativity, things we need for working with tech. It might be the education needed to live in an AI world

frankel, to random
@frankel@mastodon.top avatar

StarCoder: A State-of-the-Art LLM for Code https://huggingface.co/blog/starcoder
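For anyone who wants to poke at it, a minimal sketch of loading StarCoder through Hugging Face transformers might look like the following; the checkpoint is gated behind the BigCode license on the Hub, the full model needs a large GPU, and the generation settings below are illustrative rather than recommended.

```python
# Sketch: loading StarCoder with Hugging Face transformers for code completion.
# Requires accepting the BigCode license on the Hub, the accelerate package for
# device_map="auto", and substantial GPU memory for the full model.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```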

vitriolix, to ai
@vitriolix@mastodon.social avatar

well that got dark quick

seldoncrisis, to ai
givemefoxes, to ai

People in the know about AI and LLMs: is there an upper limit to what's possible with an LLM, or can you just keep throwing more compute power at training to make them better?
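There is at least a partial empirical answer in the scaling-law literature. Hoffmann et al. (2022, the "Chinchilla" paper) fit LLM validation loss as a function of parameter count N and training tokens D with a form roughly like the following, where E, A, B, α, and β are fitted constants:

```latex
% Parametric scaling law (Hoffmann et al., 2022): loss falls as parameters N
% and training tokens D grow, flattening toward the irreducible term E.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

On that fit, more compute keeps helping as long as N and D grow together, but with diminishing returns toward the irreducible term E; whether that flattening amounts to a practical upper limit is exactly the open question.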

Jigsaw_You, to ai Dutch
@Jigsaw_You@mastodon.nl avatar
brianleroux, to random

got access to gpt4 (gipitty four) and plugins

what should I build?

janriemer, to ai

If you want to know the current state of AI and where we're heading, do not read this fantastic article on Quanta Magazine by Max G. Levy

Chatbots Don’t Know What Stuff Isn’t:

https://www.quantamagazine.org/ai-like-chatgpt-are-no-good-at-not-20230512/

ErikJonker, to ai
@ErikJonker@mastodon.social avatar

Suppose I would like to measure the amount of bias, discrimination, hallucination etc. in tools like Bard, Bing, ChatGPT and others. Are there already standards and tools to measure that?
There will be discussions about whether model A is better or worse than model B, so it would be nice to have some standards/benchmarks for evaluation 🤔
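There are published efforts in this direction, e.g. TruthfulQA for falsehoods, BBQ for social bias, and Stanford's HELM for broad multi-metric comparison, though none is a settled standard yet. Mechanically, most reduce to a loop like the sketch below, where query_model is a placeholder for whichever chatbot API is being measured and the scoring rule is deliberately simplistic:

```python
# Minimal evaluation-harness sketch: run a fixed prompt set through a model and
# score the answers against references. query_model() is a placeholder; real
# benchmarks (TruthfulQA, BBQ, HELM, ...) use far richer prompt sets and scoring.
from dataclasses import dataclass

@dataclass
class EvalItem:
    prompt: str
    reference: str  # expected answer for this prompt

EVAL_SET = [
    EvalItem("What is the capital of Australia?", "Canberra"),
    EvalItem("Which city hosted the 2016 Summer Olympics?", "Rio de Janeiro"),
]

def query_model(prompt: str) -> str:
    """Placeholder: call the chatbot under test here."""
    raise NotImplementedError

def score(items: list[EvalItem]) -> float:
    """Fraction of answers that contain the reference string (crude matching)."""
    hits = 0
    for item in items:
        answer = query_model(item.prompt)
        hits += int(item.reference.lower() in answer.lower())
    return hits / len(items)

if __name__ == "__main__":
    print(f"accuracy: {score(EVAL_SET):.2%}")
```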

stefan, to ChatGPT
@stefan@stefanbohacek.online avatar

"You can design all the neural networks you want, you can get all the researchers involved you want, but without labelers, you have no ChatGPT," [OpenAI contractor Alexej Savreux] added. "You have nothing."

https://futurism.com/the-byte/chatgpt-15-hour-workers

bwaber, to random
@bwaber@hci.social avatar

A beautiful day in Boston, and in anticipation of getting my walking boot off next Friday (🤞) I tried walking a bit on a dirt path with no issue and listening to talks for my ! Also I found some wild asparagus, which was absolutely delicious. (1/9)

Wild asparagus growing in a grassy field

bwaber,
@bwaber@hci.social avatar

Next was an interesting talk by @annargrs on focusing on understanding capabilities rather than vague notions of "understanding" at @sfiscience https://www.youtube.com/watch?v=ycCcWEuFE48 (5/9)

ppatel, to ai
@ppatel@mstdn.social avatar

You know that internal memo leaked last week about open source AI? Here's an example of one of its points. Lots of things happening in the open source model space. I'm seeing something new every day.

Allen Institute is working on a new open language model.

https://blog.allenai.org/announcing-ai2-olmo-an-open-language-model-made-by-scientists-for-scientists-ab761e4e9b76

pseudonym, to ai

Best use of AI/LLMs I've seen: de-sensationalize news stories.

https://www.boringreport.org/app

iamkale, to random

My understanding is that the latest version of Google Bard uses a new AI model called PaLM 2. How novel is this compared to what's currently powering tools like ChatGPT? How might it compare to LLaMA-based open-source work?

I think I read recently that a lot of the innovations in the AI space in the last decade or so came out of Google in some shape or form, but other companies productized the technology first. I wonder if Google's success with Bard might be due less to the underlying LLM technology and more to the breadth of other services that Google offers.

But I also heard they have a slimmer version of the Bard model that can run on mobile, called "Gecko"? That's potentially exciting.

What a wild time for technology, even if this is all statistical guessing behind the scenes.

bitboxer, to random

Step 1: Everyone uses StackOverflow to collect knowledge about how to do things with code.
Step 2: Use an LLM to create a bot that answers questions based on this data.
Step 3: StackOverflow usage declines (already down 14%).
Step 4: If StackOverflow closes, LLMs have no new data to learn answers to common questions about new frameworks.

Hooray, I guess?

ianRobinson, to ai
@ianRobinson@mastodon.social avatar

Good grief. Elementor has integrated generative AI into their WordPress tools. It can do useful stuff like writing CSS for buttons from text descriptions of what you want the button to do.

But it can also write blogs. Maybe we can get other LLMs to read the blogs, then we can get on with doing useful things 🤷🏻‍♂️

https://youtu.be/_P_4SHBQtiQ
