lobrien

@lobrien@sigmoid.social

A writer who programs. Formerly, a programmer who wrote. ML for Earth. Formerly: Azure ML, Xamarin, Gemini Observatory, magazines (AI Expert, Game Developer, Computer Language), etc.

#ML #Hawaii #MLOps #Pytorch #MetricLearning #FastAI #MarineBio #TTRPG #Blackwater #UnderwaterPhotography Searchable on #tfr


kellogh, to random
@kellogh@hachyderm.io

hot take, and this really is hot: i like the apple vision headset. not sure i'd personally pay the price tag, but i could imagine it becoming an indispensable part of wfh or office settings. more than anything, i'm blown away by the huge effort that went into getting all the details right — a custom-built r1 chip, "invisible" headsets, new os... it's the first ar/vr device that actually seems like it's not just a demo

lobrien,

@kellogh My best guess (but I do not play 5D chess!): Does this become the new display model for the digitally-immersed worker? A headset vs 3 monitors?

interfluidity, to random

a thing i don’t get is what is new. i mean, computers have long been much, much “smarter” than humans in, for example, their ability to perform arithmetic, or to remember things. recent AI tools are interesting for sure, but what superior competence of theirs makes these new systems so threatening, compared to older superior competences?

lobrien,

@interfluidity I think the valid issue is "capability overhang": LLMs were designed for translation, but have proved capable across a surprising variety of tasks (programming, mathematical reasoning, etc.). Given "the bitter lesson," that more data and more computation often trump the best-laid domain-specific strategies, the feeling is that we may shortly face superior competency across a huge surface area, and it's reasonable to expect significant societal disruption.

lobrien,

@interfluidity I think those are important questions! I think there are good arguments that apocalyptic talk is distracting (maybe even deliberately!) from current harms and biases.

lobrien,

@interfluidity @pluralistic A year ago, it felt like the industry was focusing on immediate, demonstrable problems such as you describe. Now, "responsible AI" teams focused on that sort of thing have been fired, and science fiction apocalypses get more attention than the "it must be fair, the machine suggested it," laundering you talk about. It does seem oddly convenient to the power status quo.

merott, to random

ChatGPT is a fantastic code reviewer.

GPT-4's code reviews alone are worth the $20 monthly fee.

lobrien,

@merott Do you have a boilerplate prompt you could share or do you do it ad hoc? ("Is the following code clear? Does the following code follow good coding practices?" Etc)
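For what a boilerplate might look like (purely illustrative; this is not merott's actual prompt, and the template wording and helper name are my own invention), a small helper that wraps the code to be reviewed in a fixed review request:

```python
# Hypothetical boilerplate for a ChatGPT/GPT-4 code-review request.
# The template text and function name are illustrative assumptions.

REVIEW_TEMPLATE = """You are a senior code reviewer.
Review the following {language} code for:
1. Clarity and readability
2. Adherence to good coding practices
3. Potential bugs or unhandled edge cases
Suggest concrete improvements.

Code:
{code}
"""

def build_review_prompt(code: str, language: str = "python") -> str:
    """Fill the boilerplate template with the code to be reviewed."""
    return REVIEW_TEMPLATE.format(language=language, code=code)

prompt = build_review_prompt("def add(a, b): return a + b")
print(prompt)
```

The resulting string is what you'd paste into the chat (or send via an API); keeping the checklist fixed makes reviews comparable across snippets.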

lobrien, to random

Is anyone working on incorporating into ? There was a brief period in the early 90s when "" aimed for relevance, but nets were still shallow and fuzzy logic fizzled (AI Winter? Used but no longer called AI? Not competitive with existing control-system formalisms?). My is that FL feels like it's differentiable (continuous truth/belief values). If true, it might be well suited for integration with statistical machine learning.
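To make the differentiability hunch concrete, here is a minimal sketch in plain Python (my own illustration, using the product t-norm family of fuzzy connectives; other t-norms exist): the connectives are smooth functions of continuous truth values in [0, 1], so gradients flow through them, which is the property gradient-based learning needs.

```python
# Differentiable fuzzy-logic connectives (product t-norm family).
# Truth values are continuous in [0, 1], so these are smooth maps,
# not step functions. Illustrative sketch only.

def f_and(a: float, b: float) -> float:
    return a * b                 # product t-norm

def f_or(a: float, b: float) -> float:
    return a + b - a * b         # probabilistic sum (the dual t-conorm)

def f_not(a: float) -> float:
    return 1.0 - a

def grad(f, x: float, eps: float = 1e-6) -> float:
    """Central finite difference: the connectives have well-defined slopes."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# AND(0.8, b), viewed as a function of b, has slope 0.8 everywhere,
# so a learning signal can propagate through the fuzzy conjunction:
slope = grad(lambda b: f_and(0.8, b), 0.5)
print(round(slope, 4))  # 0.8
```

Contrast Boolean AND, whose output is piecewise constant and carries zero gradient almost everywhere; the continuous truth values are exactly what would let fuzzy operators sit inside a statistically trained model.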
