Acemoglu has written a 546-page treatise that demolishes the Church of Technology, demonstrating how innovation often winds up being harmful to society.
I can only start to imagine how they filled that many pages with… text.
Example: "A small number of people are going to be on top — they’re going to design and use those technologies — and a very large number of people will only have marginal jobs, or not very meaningful jobs." The result, he fears, is a future of lower wages for most of us.
How is that different from any other point in history? It’s a big load of words without meaning.
Damn, Emad was one of the only reasons they felt trustworthy to me. I moderated a Discord community that he was part of, and he felt like the epitome of "show your work" as a person. From what I could tell he was always very transparent about his work and about how and what he wanted it to accomplish.
I expect a significant change in direction for the worse, unfortunately. That said, I hope this is something that is best for him rather than an ousting, and that he can find work in the future.
This is an impressive demo; it seems like simply training multimodal models approaches the appearance of symbolic understanding. It's surprising how effective this can be.
Isn’t it the whole point that you don’t just learn specific actions but the underlying concepts? You don’t just learn how to use a calculator, you also learn what the actual math behind it is, specifically so that the tool you use to calculate is irrelevant.
The headline is like expecting someone with the knowledge of a 10-year-old to be able to work efficiently. What a load of nonsense, just like the other anti-AI article earlier.
We’re still waiting on cold fusion and all the graphene miracles. I think AGI is a lonnnng (millennia) way off. Most of these companies say this stuff to get funding, but they know what they call AI is just a very complex parrot. And not the intelligent parts, just the mimicking ones. Nowhere close to AGI.
“I approach all of this from a place of optimism. The reason I do tech criticism is because of the belief that things can be better. And if we look at all kinds of past crises, things worked out in the end, but that’s because people worried about them at key moments.” I like the place the article is coming from, and the importance of distinguishing between predictive and generative AI. Of course, the click-bait title (probably crafted by an editor) contains none of the nuance that is present in the article and the interview, but hey, it’s the internet.
Apparently Inflection AI have bought 22,000 H100 GPUs. The H100 has approximately 4x the compute for transformers as the A100. GPT4 is rumored to be 10x larger than GPT3. GPT3 takes approximately 34 days to train on 1024 A100 GPUs.
So with 22,000 × 4 / 1024 ≈ 86x the compute of that GPT-3 run, they could easily train a model 10x the size of GPT-4 in 1-2 months. Getting to 100x the size would also be feasible, but likely they're banking on the claimed 3x speedup from FlashAttention-2, which would bring that down to roughly 6 months of training.
It's crazy that these scales and timelines seem plausible.
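The back-of-envelope math in that comment can be sketched in a few lines. Note that every input here is the commenter's assumption (GPU counts, the 4x H100-vs-A100 speedup, the rumored 10x size jump, the 34-day GPT-3 run), not a confirmed spec, and the estimate naively assumes training cost scales linearly with model size:

```python
# All constants below are the commenter's assumptions, not confirmed figures.
H100_COUNT = 22_000    # Inflection AI's reportedly purchased H100s
H100_VS_A100 = 4       # assumed per-GPU transformer speedup over A100
GPT3_GPUS = 1024       # A100s in the reference GPT-3 training run
GPT3_DAYS = 34         # assumed wall-clock time of that run

# Effective compute relative to the GPT-3 cluster
compute_ratio = H100_COUNT * H100_VS_A100 / GPT3_GPUS
print(f"compute vs. GPT-3 cluster: {compute_ratio}x")  # -> 85.9375x

def days_to_train(size_vs_gpt3: float, speedup: float = 1.0) -> float:
    """Naive estimate: training time scales linearly with model size."""
    return GPT3_DAYS * size_vs_gpt3 / (compute_ratio * speedup)

# 10x GPT-4 is ~100x GPT-3, using the rumored 10x GPT-3 -> GPT-4 jump
print(f"10x GPT-4 size: {days_to_train(100):.0f} days")
# 100x GPT-4 is ~1000x GPT-3, with the claimed 3x FlashAttention-2 speedup
print(f"100x GPT-4 size, 3x speedup: {days_to_train(1000, speedup=3):.0f} days")
```

Under these assumptions the first scenario lands at about 40 days (the "1-2 months" in the comment) and the second at roughly 130 days, a bit under the ~6 months quoted.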
Nobody should expect any particular species to stick around forever. Something was always going to replace humans someday, by whatever definition of "human" and "replacement" you might be using.
My main hope is just that whatever eventually replaces us is worthy of the title. And ideally has some fondness for their evolutionary ancestors.
The AI Community On Kbin