chrash0

@chrash0@lemmy.world


chrash0,

tbh this research has been ongoing for a while. this guy has been working on this problem for years in his homelab. it's also long been suggested that this could be a step toward better efficiency.

this definitely doesn't spell the end of digital electronics. at the end of the day, we're still going to want light switches, and it's not practical to have a butter-spreading robot that can experience an existential crisis. neural networks, both organic and artificial, perform more or less the same function: given some input, predict an output and attempt to learn from the outcome. the neat part is that when you pile on a trillion of them, you get a being that can efficiently adapt to scenarios it's not familiar with.
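for the curious, here's a minimal sketch of that predict-then-learn loop for a single artificial neuron; the data, learning rate, and target are all made up for illustration:

```python
import numpy as np

# toy "neuron": given some input, predict an output, then learn from the error.
rng = np.random.default_rng(0)
w, b = rng.normal(size=3), 0.0               # weights and bias
x, target = np.array([0.5, -1.0, 2.0]), 1.0  # arbitrary input and desired output

for step in range(100):
    pred = 1 / (1 + np.exp(-(w @ x + b)))    # sigmoid activation
    err = pred - target                      # how wrong were we?
    w -= 0.1 * err * pred * (1 - pred) * x   # nudge weights down the gradient
    b -= 0.1 * err * pred * (1 - pred)

print(round(pred, 3))  # creeps toward 1.0 as the neuron "learns"
```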

you’ll notice they’re not advertising any experimental results with regard to prediction benchmarks. that’s because 1) this actually isn’t large-scale enough to compete with state-of-the-art ANNs, 2) the relatively low resolution (16-bit) means inputs and outputs will be simple, and 3) this is more of a SaaS product than an introduction to organic computing as a concept.

it looks like a neat API if you want to start messing with these concepts without having to build a lab.

Why Is There an AI Hype? | The Luddite (theluddite.org)

Companies are training LLMs on all the data that they can find, but this data is not the world, but discourse about the world. The rank-and-file developers at these companies, in their naivete, do not see that distinction…So, as these LLMs become increasingly but asymptotically fluent, tantalizingly close to accuracy but...

chrash0,

this data is not the world

i think most ML researchers are aware that the data isn’t perfect, but, crucially, it exists in a digestible form.

chrash0,

as big as the circle jerk against AI is here, i think it’s on the whole a good thing if we use it for what it’s actually good at: approximating an answer. but once companies start promising things like security that require 100% accuracy, they totally lose me. as someone who has worked on recognition systems, i will be opting out so fast of things like facial scanning at PoS. it’s not AI because it’s not actually intelligent: you can’t reason with it or change its mind without rigorous training. write some shitty code for me to fix? fine. buy a TV with facial scanning built by whatever bs contractor bid the lowest? gtfo. startup founders, executives, and managers will promise the moon when they’re so far up their own ass they’ve never even seen it.

chrash0,

there are language models that are quite feasible to run locally for easier tasks like this. “local” rules out both ChatGPT and Copilot, since those models are enormous. “AI” these days generally means machine-learned neural networks, even if a pile of if-else statements used to pass for it.
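as a rough sketch of what “local” can look like with Hugging Face transformers; the model name here is just one example of a small instruct model that fits comfortably in RAM, not the one any particular product uses:

```python
# minimal sketch of running a small language model locally.
# the checkpoint is one example of many; swap in whatever fits your hardware.
from transformers import pipeline

pipe = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
out = pipe("Write a one-line commit message for a typo fix.", max_new_tokens=40)
print(out[0]["generated_text"])
```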

not sure how they’re going to handle low-resource machines, but as far as AI integrations go this one is rather tame

chrash0,

i mean, i’ve worked on neural networks for embedded systems, and it’s definitely possible. i share your skepticism about overhead, but i’ll eat my shoes if it isn’t opt-in

chrash0,

if it’s easier to pay, people spend more

chrash0,

IBM then. or, i don’t know, the British Royal Family?

the reality of talking about extremist economics is no one knows how it would work out in the long term. but regardless, if it happened tomorrow we already have a Microsoft to deal with.

“taxation is theft” “wage labour is exploitation”

sometimes things are subtle and complicated and can’t be practically boiled down to absolutes.

chrash0,

honestly, 8-space indents always felt a bit ridiculous to me. i usually use 4 since it’s more conventional in most languages, but i could also be happy with 2.

weird hill to die on. use the default settings unless you have a good reason not to. the argument itself is a waste of time on projects that want to get things done.

chrash0,

gotem!

seriously tho, you don’t think OpenAI is tracking this? architectural improvements and training strategies are developing all the time

chrash0,

“we don’t know how” != “it’s not possible”

i think OpenAI more than anyone knows the challenges with scaling data and training. anyone working on AI knows the line: “a baby can learn to recognize elephants from a single instance”. reducing training data and time is fundamental to advancement. don’t get me wrong, it’s great to put numbers to these things. i just don’t think this paper is super groundbreaking or profound. a bit clickbaity and sensational for Computerphile

chrash0, (edited)

i really want to like Nix.

gave it a shot a few years ago, but i felt like the documentation and community support weren’t really there yet. this was long before Nix surpassed Arch in number of available packages. people still complain about the documentation, especially for the Nix language. i see a lot of package authors using it, which tempts me to start using at least the package manager, but a lot of packages don’t. the allure of GitOpsing my entire OS is very tempting, but then there are these rumors (now confirmed) of new forks, while Guix splintered off much earlier. for something that’s ostensibly the most stable OS, that makes me nervous. it also seems to have some nontrivial overhead: building packages, retaining old packages, etc.

the pitch for Nix is really appealing, but with so much uncertainty it’s hard to pull the trigger on migrating anything. heck, if i could pull off some PoCs, i think my enterprise job might consider adopting it, but it’s as hard a recommend for me today as it was 5 years ago.

Rabbit R1 is Just an Android App (lemmy.world)

See, it turns out that the Rabbit R1 seems to run Android under the hood and the entire interface users interact with is powered by a single Android app. A tipster shared the Rabbit R1’s launcher APK with us, and with a bit of tinkering, we managed to install it on an Android phone, specifically a Pixel 6a....

chrash0,

what else would it be? it’s a pretty common embedded target. dev kits from Qualcomm come with Android and use the Android bootloader and debug protocols at the very least.

nobody is out here running a plain Linux kernel and maintaining a UI stack while AOSP exists. would be a foolish waste of time for companies like Rabbit to use anything else imo.

to say it’s “just an Android device” is both true and a mischaracterization. it likely has a lot in common with a smartphone, but they’ve made modifications and aren’t supporting app stores or sideloading. that doesn’t mean you can’t do it, just don’t be surprised when it doesn’t work 1:1

chrash0,

i didn’t think people would really be surprised. but maybe i’m jaded by my experience in the industry.

if we’re arguing whether or not it’s objectively stupid, i think that’s up to the market to decide.

kinda seems like a toy to me anyway, and it’s kind of priced that way

chrash0,

it’s not a password; it’s closer to a username.

but realistically it’s not in my personal threat model to be ready to get tied down and forced to unlock my phone. everyone with windows on their house should know that security is mostly about how far an adversary is willing to go to try to steal from you.

personally, i like the natural daylight, and i’m not paranoid enough to brick up my windows just because it’s a potential ingress.

chrash0,

it’s an analogy that applies to me. tl;dr: worrying about having my identity stolen via physical access to my phone isn’t part of my threat model. i live in a safe city, and i don’t have anything the police could find to incriminate me. everyone is going to have a different threat model. some people need to brick up their windows

chrash0,

like i said, it’s more of a username than a password

chrash0,

i used fish as a login shell for years before i switched to nushell. it’s not super risky. bash is just as capable of having a system-breaking config. worst case, log in as root with a minimal bash config and fix things from there. if that’s fucked, you need a rescue ISO.

chrash0,

seems like chip designers are being a lot more conservative from a design perspective. NPUs are generally a shitton of 8-bit registers with optimized matrix multiplication. the “AI” that’s important isn’t the stuff in the news or the startups; it’s the things we’re already taking for granted: speech to text, text to speech, semantic analysis, image processing, semantic search, etc, etc. sure, there’s a drive to put larger language models or image generation models on embedded devices, but a lot of these applications are battle tested and would be missed or hampered if that hardware weren’t there. “AI” is a buzzword and a goalpost that moves at 90 mph. machine learning, and the hardware and software ecosystem that’s developed more or less quietly in the background over the past 15 or so years (at least compared to ChatGPT), is revolutionary tech that will be with us for a while.
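to make the NPU point concrete, here’s a rough sketch of the int8 trick that hardware is built around: quantize floats to 8-bit integers, multiply-accumulate in 32-bit, then scale back. shapes and data here are arbitrary:

```python
import numpy as np

# quantized matrix multiplication, the core workload NPUs accelerate.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8)).astype(np.float32)   # weights
x = rng.normal(size=8).astype(np.float32)        # activations

def quantize(a):
    scale = np.abs(a).max() / 127.0              # map the range onto int8
    return np.clip(np.round(a / scale), -127, 127).astype(np.int8), scale

Wq, w_scale = quantize(W)
xq, x_scale = quantize(x)

# integer matmul with 32-bit accumulation, then dequantize
y = (Wq.astype(np.int32) @ xq.astype(np.int32)).astype(np.float32)
y *= w_scale * x_scale

print(np.max(np.abs(y - W @ x)))  # small quantization error vs float matmul
```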

blockchain currencies never made sense to me from a UX or ROI perspective. they were designed to get more power hungry as adoption took off, and power and compute optimizations were always conjecture. the way wallets are handled, and how privacy was barely a concern, was never going to fly with the masses. pile on that finance is just a trash profession that requires goggles that turn every person and thing into an evaluated commodity, and you have a recipe for a grift economy.

a lot of startups will fail, but “AI” isn’t going anywhere. it’s been around as long as computers have. i think we’re going to see a similarly cautious approach (as with the chip designers) from companies like Google and Apple, as more semantic search, image editing, and conversational bot advancements make their way to the edge.

chrash0,

you’d be surprised how fast a model can be if you narrow the scope, quantize, and target specific hardware, like the AI hardware features they’re announcing.

not a 1:1 comparison, but a quantized Mistral 7B runs at ~35 tokens/sec on my M2. that’s not even as optimized as it could be. it can write simple scripts and handle decent writing prompts.

they could get really narrow in scope (super simple RAG, limited responses, etc), quantize down to something like 4-bit, and run it on custom accelerated hardware. it doesn’t have to reproduce Shakespeare, but i can imagine a PoC that runs circles around Siri in semantic understanding and generated responses. and being able to reach out on Slack to the engineers who built the NPU stack ain’t bad neither.
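for a ballpark of where tokens/sec numbers like that come from, a sketch with llama-cpp-python; the GGUF file name is hypothetical, and throughput varies wildly by hardware and quantization level:

```python
# back-of-the-envelope tokens/sec measurement with llama-cpp-python.
# the model file name is a placeholder; any quantized GGUF model works.
import time
from llama_cpp import Llama

llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf", n_gpu_layers=-1)

start = time.time()
out = llm("Write a shell one-liner to count lines in all .py files.",
          max_tokens=128)
n_tokens = out["usage"]["completion_tokens"]
print(f"{n_tokens / (time.time() - start):.1f} tokens/sec")
```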

EmpeRohr, to dnd
@EmpeRohr@social.tchncs.de

Did you ever experience a TPK?

What is the story behind it?
@dnd

chrash0,

gotta pull out the ole “win the fight; lose the cutscene”

chrash0,

it’s not the only problem listed here, and they’re pretty explicit that pickle files are known to be insecure. however, Huggingface isn’t being negligent by allowing them. somewhat ironically, it’s tough to get ML engineers/researchers to try anything beyond what they learned first. Huggingface itself makes safetensors, a more secure open-weights format, but there are competing standards in this space, and many stubborn and apathetic devs will stick with pickle cuz it’s easy. it’s a tough problem for HF, but i understand why they do it this way.

at a previous job i asked that we not use pickle files, either when trying new models or when internally distributing them, and they didn’t see the point. this is a wider cultural problem, and HF is just trying to capitalize on that market of dumb dumb ML researchers
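for anyone who hasn’t seen why pickle is a problem: unpickling can execute arbitrary code. a minimal, self-contained demo (the payload here is just an echo command):

```python
import pickle

# unpickling can run arbitrary code via __reduce__.
# never load pickle files from untrusted sources.
class Evil:
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned: this ran during pickle.load",))

payload = pickle.dumps(Evil())
pickle.loads(payload)  # prints "pwned..." -- loading *is* executing

# safetensors, by contrast, is a flat tensor format with no code paths:
#   from safetensors.torch import load_file
#   tensors = load_file("model.safetensors")  # just bytes -> tensors
```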

chrash0,

they likely aren’t creating the model themselves. the faces are probably the same AI girl you see everywhere. you’ve gotta be careful with open-weight models, because the open source image gen community has a… proclivity for porn. there’s not a “function” per se for porn. they may be doing some preprompting, or maybe “swim with the sharks” is just too vague a prompt and the model was tuned on this kind of stuff. you can add an evaluation network at the end to basically ask “is this porn/violent/disturbing”, but that needs to be tuned as well. most likely it’s even dumber than that, and the contractor just subcontracted the whole AI piece and packaged it for this use case
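a sketch of what that bolt-on evaluation network could look like; the checkpoint name is an assumption (one publicly available NSFW classifier), and any image-safety model slots in the same way:

```python
# sketch of running a safety classifier over a generated image before
# showing it. the checkpoint and file name are assumptions for illustration.
from transformers import pipeline

classify = pipeline("image-classification",
                    model="Falconsai/nsfw_image_detection")

result = classify("generated_image.png")[0]  # top label with its score
if result["label"] == "nsfw" and result["score"] > 0.8:
    print("blocked: regenerate with a safer prompt")
```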

chrash0, (edited)

i’ve been daily driving nushell for about 6 months and it’s been great for the most part. the downsides are 90% regular breaking changes (big breaking changes just dropped today that i’ll have to migrate) and 10% translating scripts or commands from bash.

it can really make you feel like a wizard the first time you bang out a pipeline to change some data in a JSON file.

the only thing i might mildly disagree with is the sentiment that we need community buy-in. sure it would be nice if the project had more eyes on it, but i’m not trying to convince my company to adopt nushell. unlike TypeScript or Rust i don’t have to inconvenience anyone by introducing nushell to my workflow. you can just start using it. and i’d recommend it to basically anyone who isn’t brand new to shells. but it doesn’t hurt my feelings one bit if my coworkers don’t see the appeal

chrash0,

depends on how entrenched you are in bash/POSIX, but it’s a fairly simple language to learn.
