Because I live in the strange intersectional space where I can simultaneously be pining for FOSDEM and willing to stay in the US and fork over for an Apple Vision Pro, here are my first weekend impressions of the latter:
Quite a few people have been disappointed by the passthrough, eye-tracking and field-of-view, to the extent that big Apple fans are thinking of sending it back. I find them all fine, and I suspect this may be a combo of physiological variance and actual hardware issues/bugs. Some people say they can't read text in passthrough, for instance, and it seems fine to me.
The persona and EyeSight (the googly eyes on the front) are clearly Not Very Good, but they're demos of things that /should/ be good in a version 2 or 3. The thing is clearly meant to be see-through glasses, with higher-res cameras, but they just didn't pull that off in the first gen. https://www.ifixit.com/News/90137/vision-pro-teardown-why-those-fake-eyes-look-so-weird goes into some of the tradeoffs that make the EyeSight feature not quite come together.
I think the thing that may have made Apple pull the trigger on shipping is that the AVP v1.0 can throw around some emotional heft here: namely spatial videos and the 3D interactive elements. I jumped at the dinosaur, and spent far too long dreamily watching a 3D video I took of my family at Christmas. There's some cheesy humanity in this tech somewhere, and that's usually what Apple seizes on and tries to foster.
The UX feels work-y (or art-y or creative-y or do-something-that-isn't-just-computer-y). I am unsure if I could work for long in it, but it felt like a full-fledged user interface, in a way that the Quest/Steam UI doesn't really.
The main thing limiting it at the moment is that normal UI interactions can get a little ... intense. I brought up a Safari webpage, and the combo of bright colors and eye-tracking across lots of links gave me real sensory overload. Maybe you adapt, but the rest of the interface -- which is quite muted and visually subtle -- feels toned down by comparison. We may think we want a million windows floating in 3D space, but I'm not sure that's tolerable, even for us.
I'm really fooling around with this to see what we need to gank for the Free and Open World. I don't think there's anything utterly unobtainable here, apart ofc from the hardware, but that will slowly trickle down into open hardware land. I think it's noteworthy that Apple chose to go the locked-down iOS/iPadOS app route, rather than making this a desktop Mac alternative. I think you could build a competitive, unrestricted, seize-the-means-of-computation version of this someday.
My instinct, as ever, is to treat these initial forays by well-capitalized corps -- just like AT&T Unix, the early OS X laptops, the iPhone, the iPad, the Watch, GPT -- like palantirs. Good for seeing a little bit into the future to know where we're heading, but don't get too comfortable in the world they are building, and make sure you're not giving more info to it than you are taking from it...
@boris @danny ahhh the gazillion dollar question! Chasing taillights is a proven strategy and gets you a long way, and I deeply believe that your strat (and that of all the other smartest people) of grabbing and running with the research that the big tech companies miss is the way to get ahead. But I still think, as you know, we need new institutions. Ink & Switch, Fission, Dynamicland, Spritely, the beautiful genius reading this post right now, and I dunno, the support of the Wakandan govt
I just asked a question about fraud, which is really tricky when you're paying a stranger $$$ based on how much content they claim they're distributing. I got a reasonable answer about the parallels with click fraud, and how non-zero levels of fraud are OK, but the long-term answer is aligning incentives so that the distance between the person distributing the content and the person paying to receive it doesn't leave room for fraud. It's still a hard problem though!
So this is a demo of the idea of content-addressed computation -- you take a hash of a WASM function and a hash of your inputs, and you get back a hash of the output. You only need to run a function with given inputs once, because results are cached across the whole network. Inputs and outputs are grabbable using content addressing, because if you have the hash, you can find the file! #everywherecomputer
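The cache-by-hash idea can be sketched in a few lines of Python -- a toy model, not the actual Everywhere Computer implementation. Plain dicts stand in for the network-wide content store and result cache, and a Python lambda stands in for the WASM module:

```python
import hashlib
import json

def h(data: bytes) -> str:
    """A content address: the SHA-256 hex digest of the bytes."""
    return hashlib.sha256(data).hexdigest()

# In the real system these are shared across the network;
# here they are just in-memory dicts.
store: dict[str, bytes] = {}               # content-addressed blob store
cache: dict[tuple[str, str], str] = {}     # (fn hash, input hash) -> output hash

def put(data: bytes) -> str:
    """Store a blob and return its content address."""
    addr = h(data)
    store[addr] = data
    return addr

def run(fn_addr: str, input_addr: str, fn) -> str:
    """Run fn on the input blob -- but only if no one has done so before."""
    key = (fn_addr, input_addr)
    if key not in cache:
        result = fn(store[input_addr])
        cache[key] = put(result)
    return cache[key]   # the hash of the output

# Hypothetical example: a "function" that doubles a number.
fn_addr = put(b"double")   # stands in for the hash of a WASM module
in_addr = put(json.dumps(21).encode())
out_addr = run(fn_addr, in_addr,
               lambda b: json.dumps(json.loads(b) * 2).encode())
print(json.loads(store[out_addr]))  # → 42
```

A second call to `run` with the same two hashes returns the cached output address without ever invoking the function again, which is the whole trick.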
Now I'm sitting with Juan Benet @dietrich , @boris , Megan Klimen and others listening to @robin setting the scene for a discussion of #ipfs governance and nurturing the ecosystem
this was a great discussion: we talked about tooling (and ended up exploring some convos about IPFS chat https://tinyurl.com/ipfschat on https://pol.is/ , a tool that we're building on in the #filecoin universe too), gardening and documentation, oral cultures in a zoom era, Coasean floors, and teasing out the elemental parts of a governance organization
Finally, a fishbowl session where we all agonize about how to make #IPFS mainstream. https://en.wikipedia.org/wiki/Fishbowl_(conversation) -- I've never done this before! I took one for the team and relayed what users tell me about #IPFS first impressions -- that it eats a lot of CPU and opens a lot of network connections. I think it's actually just an issue of Kubo's defaults, but it's the number one papercut I hear about.
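For what it's worth, Kubo ships with built-in config profiles that tame those defaults -- a config fragment (at the time of writing) looks like:

```shell
# Apply Kubo's built-in "lowpower" profile, which lowers the
# connection-manager limits and disables some background tasks:
ipfs config profile apply lowpower

# Or tune the connection manager by hand, e.g. cap the swarm:
ipfs config --json Swarm.ConnMgr.HighWater 100
```

Neither is the default, which is rather the point of the papercut.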