kellogh,
@kellogh@hachyderm.io

This is great! We need models to be explainable and understandable. But a few problems:

  1. Billions of parameters
  2. Nobody knows what an explanation is

https://openai.com/research/language-models-can-explain-neurons-in-language-models

russelsteapot42,

@kellogh

It's kinda like having just gained the ability to read the genetic code, but not yet knowing what anything actually codes for.

They may have opened the black box, but now they've got their work cut out for them.

kellogh,
@kellogh@hachyderm.io

@russelsteapot42 exactly!

kellogh,
@kellogh@hachyderm.io

When you have billions of anything, most things get difficult. It doesn't really matter what you're talking about. Explanations are very tough as it is; it's going to be very tough to give people what they're looking for.

More on that — most existing explanation algorithms have suffered from people misusing them, taking them to mean things they were never meant to mean.

kellogh,
@kellogh@hachyderm.io

Last, most people don't know what an explanation is. It's easy to talk about informally, but it's REALLY hard to define precisely. This 60-page paper gives an overview of all the things we mean when we talk about "explanation", all from humanities research. It's worth a read if you want to understand how complex we really are https://arxiv.org/abs/1706.07269
