pup_atlas

@pup_atlas@pawb.social


pup_atlas,

I wholeheartedly disagree. I have more trained ears than most (I worked in video production), but not by much, and when I got my AirPods Max, I thought they sounded awful at first. They were crunchy and dithered-sounding in this weird way. I was gonna return them, but I did some testing and discovered that I was hearing Spotify compression. I turned the quality up as high as it would go in the settings, and that made it a little better, but I could still hear it, and can to this day. I did some further testing by signing up for a Tidal free trial, in addition to Apple Music. Listening in lossless was an entirely different experience; I could definitely tell the two apart blindly, without even specifically listening for sound quality. There were 2 to 3 instruments in a given song that I couldn't pick out in the lower-quality audio that I could easily pick out in the lossless audio. You have to have a pretty decent pair of headphones to hear it, but some of the higher-end consumer stuff can definitely hit that level, and when you do, it's not something you have to go looking for; it sounds very obvious.

pup_atlas,

There may be other factors at play; Apple quite likes to compress stream data between their own devices, even on "standard" protocols (just look at their monitor collaboration with LG, where they did the same thing to exceed the max resolution of an existing display signal). Regardless, there is a difference, and it is not a small one. It was immediately obvious to me after listening to a single song. Something about the pipeline is crunching audio to the level where it's obviously degraded. This isn't audiophile-grade splitting hairs and "I think it sounds ever so slightly better with these gold cables"; it was like the difference between 480p and 1080p video to me, enough to be actually annoying during normal listening, even if I was actively trying to forget about it.

pup_atlas,

Rooted, de-Googled AOSP is definitely not "mainstream". Mainstream to me means something you can buy off the shelf and start using without having to modify it.

pup_atlas,

By doing it that way, you are all of a sudden generating tens, if not hundreds, of requests per minute to grab notifications for every platform and service, rather than just the one. With a unified approach, the phone can wake up in the background every 5 minutes and ping Google to ask for notifications. If everyone did it individually, your phone would never be able to go to sleep, and would CONSTANTLY be sending out requests to random servers. That also brings up security concerns: since you can get a vague idea of location data from a request, any app that can send notifications can soft-track users. It would also make it much easier for any one of those servers to be compromised and push malicious info than it would be to do through Google. All around, it's just a worse solution to the problem, with one very small benefit.
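To put rough numbers on that trade-off, here's a small TypeScript sketch; the app count and the 5-minute interval are illustrative assumptions on my part, not measurements of any real platform:

```typescript
// Hypothetical sketch: request volume of per-app polling vs. a single unified
// push gateway. The numbers below are assumptions for illustration only.

const INSTALLED_APPS = 60;             // apps that each want background notifications
const WAKEUPS_PER_DAY = (24 * 60) / 5; // one wake-up every 5 minutes = 288/day

// Per-app polling: every app contacts its own server on every wake-up.
const perAppRequests = INSTALLED_APPS * WAKEUPS_PER_DAY; // 17,280 requests/day

// Unified approach: one request to a single gateway, fanned out server-side.
const unifiedRequests = 1 * WAKEUPS_PER_DAY;             // 288 requests/day

console.log(`per-app polling: ${perAppRequests} requests/day to ${INSTALLED_APPS} different servers`);
console.log(`unified gateway: ${unifiedRequests} requests/day to 1 server`);

// Each per-app request also exposes the device's IP (a rough location) to a
// different third party, which is the soft-tracking concern described above.
```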

pup_atlas,

Perhaps YouTube Premium subscribers would have standing for a class action, since Google is materially worsening the experience of a paid product if you don't use their browser.

pup_atlas,

Perhaps not, but it's not just the act of making the service worse, it's doing so measurably, to paying customers, ONLY when using a competitor's product. With those caveats, I think you could at least argue standing. Winning is a whole other battle.

pup_atlas,

The article mentions that the service is run by Sunbird. Just from reading their FAQ, it doesn't actually sound like they are MITM'ing messages via some Mac server somewhere. It actually sounds more plausible to me that they are doing all the magic "on device". They specifically mention that this won't work on multiple phones at the same time; that's what's tipping me off.

What I suspect is happening is that the phone itself is spoofing an actual iPhone and connecting to Apple's servers as if it were one. Normally you wouldn't be able to do this: Apple sells the phones, so they know all the serial numbers that should be able to access iMessage, and would be able to block anything that doesn't report as a real iPhone. What I think may be happening is that Sunbird could be buying up pallets of dead, old, or otherwise unusable iPhones for pennies on the dollar, and using those serial numbers to impersonate an iPhone from another device (like the Nothing Phone) directly.

This would make sense with their business model; according to their FAQ, they have "no reason to charge money" for their product yet. Buying access to iMessage for a few bucks upfront, with no ongoing cost, would match up with what they are claiming, and it would be extremely hard for Apple to detect on their end, as the devices would appear to be all sorts of models, bought at different times, in different places, and signed in by real people.

I want to reiterate that this is pure speculation on my part; it's just a theory. And while this would mean that (in theory) chats could (and would) be E2E encrypted from sender to receiver, ultimately it's still Nothing/Sunbird's app, so they could be doing anything with it on device.

pup_atlas,

They claim it in the article, and in a few other publications, but I haven't seen anything that explicitly confirms, from Sunbird, that this is the case, including on their website. They also make claims on their website that conflict with that architecture, as I don't believe it would be possible to E2E encrypt messages the way they claim they do. I kinda wonder if the Mac Mini claim is an assumption that everyone just ran with, without confirming that it's true. I could be wrong though; I'll gladly eat my words if anyone has a primary source to cite, but that architecture and business model just don't appear to be compatible with their claims.

pup_atlas,

Yup, those clauses should be illegal, especially nowadays. In the past, you could cross out sections you didn't agree to before signing, or negotiate the contract somehow. But now, contracts are very clearly designed as a "take it or leave it" offer, and when literally everyone in a given marketplace for an essential product or service has the same clauses, it effectively removes your rights.

pup_atlas,

This doesn't sound like censorship; it sounds like they were getting legal threats levied directly at their volunteer team. I can understand the desire to protect yourself against getting sued over your (admittedly large) side project. It sounds like they are working on it in good faith though.

pup_atlas,

Firstly, let me be clear: this is my opinion as someone who isn't, and has never been, involved with their instance in any capacity. My account is with pawb.social.

Posting legal letters may open them up to additional legal liability, and it is completely reasonable to let the community know what happened without sharing the actual letter.

Secondly, you’re putting words in my mouth. I am not saying they are censoring in good faith. I am saying the complete opposite, that they are reversing their stance in good faith. In other words, they are uncensoring content in good faith.

Tesla trial over Autopilot fatality kicks off in California (www.reuters.com)

SAN FRANCISCO, Sept 28 (Reuters) - Opening statements are set to begin on Thursday in the first U.S. trial over allegations that Tesla’s (TSLA.O) Autopilot driver assistant feature led to a death, and its results could help shape similar cases across the country....

pup_atlas,

My guy, the feature is literally named “Autopilot”. By definition they are advertising it as a system that can PILOT the car AUTOMATICALLY. It doesn’t matter what they put in the fine print, they are blatantly describing the system as autonomous.

pup_atlas,

I would love to switch, and I tried to the other day, but I discovered that Firefox still doesn't support integrated WebAuthn tokens (i.e. using Touch ID in lieu of a YubiKey). That is (unfortunately) a non-starter for me, as I use that technology everywhere, and I'm not intentionally weakening my security posture to switch. I'm honestly really surprised to find this feature disparity, as this feature has been generally available elsewhere for years. I'm a developer, so maybe I'll take a crack at implementing it myself sometime, but it's a big enough deal that I genuinely can't switch yet :(
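For anyone unfamiliar, this is roughly the feature I mean. A minimal browser-side WebAuthn sketch (the relying-party and user values are placeholders; a real flow gets the challenge from the server and registers the resulting credential there) that requests a platform authenticator like Touch ID instead of a roaming key:

```typescript
// Minimal sketch of registering a platform authenticator (e.g. Touch ID) via
// WebAuthn. All relying-party/user values below are placeholders.

async function registerPlatformAuthenticator(): Promise<Credential | null> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // normally server-issued
    rp: { name: "example.com", id: "example.com" },
    user: {
      id: new TextEncoder().encode("user-1234"),
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    authenticatorSelection: {
      authenticatorAttachment: "platform", // built-in authenticator, not a USB key
      userVerification: "required",        // triggers the biometric / PIN prompt
    },
    timeout: 60_000,
  };

  // In a browser with platform-authenticator support, this call is what brings
  // up the Touch ID / Windows Hello prompt; the feature gap above is this call.
  return navigator.credentials.create({ publicKey });
}
```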

pup_atlas,

I would argue this level of delay is a miscarriage of justice. Actions this malicious could easily put companies out of business full years before a trial would even start. Where is the fair distinction between "slow/thorough" and "delay until the problem goes away"? There's a non-zero chance some of the perpetrators will literally die before ever facing justice. How is that fair to the plaintiff?

pup_atlas,

Because it's the last holdout. I use mostly Apple devices, and my Lightning devices are pretty much the only things I have that use something other than USB-C to charge. It would also make the "hey, does anyone have a charger?" question at parties or work a lot more tolerable.

pup_atlas,

I have another point to consider that I don't see talked about a lot. Their Pro Display XDR is targeting actual professionals in the video field. Unlike pretty much everything else Apple makes, this monitor is comparable (and downright cheap) when compared side by side with other industry-standard color-calibrated video monitors. Professional-grade video equipment has always been super expensive; it's not just an Apple thing. For example, here's a Sony model commonly used in live broadcast: same size, but the Apple monitor is actually 1.6x as bright for HDR, it's higher resolution, and it's less than half the price. The only downside is no SDI input, but it can still be used for post-processing just fine, or even live with a converter box. It also explains why the stand for the monitor doesn't come standard and is expensive as hell: they don't expect anyone in their target market to buy it. They expect most of these monitors to be installed as drop-in replacements in color grading workflows or broadcast trucks, which are all pretty much fully VESA-mounted already anyway.

In that context, the Pro Display XDR makes perfect sense. On a cost basis alone, Apple's monitor offering is very competitive for the professional video demographic they're targeting. It's not for the average power user; it's for people whose literal sole job rides on colors being accurate.

As for the polishing cloths, yes, they're expensive when purchased separately, but they come "for free" in the box. I would rather they sell them separately than not at all, but the screen really doesn't require anything special; just any old microfiber cloth should be fine, as long as the cloth is kept clean. Even that markup isn't insane IMO; it appears to only be a $5-10 markup on an accessory for a monitor they expect to be very low volume.

Overall, I think the product is just misunderstood more than anything. I don't think it's being advertised wrong; I think Apple just has such a proclivity to advertise their other products wrong that people's expectations aren't set correctly for when Apple is actually addressing the professional market (cough cough the iPhone PRO, a product that isn't really a "professional" product in any sense of the word). These are just what professional-grade products cost. Sure it's expensive, but that's what they have to cost to make these devices viable for any business to manufacture. The combination of low volume, high cost of components with better-than-average precision, and pro-grade calibration means that they just plain cost a lot more to make.

Equipment like this and the Sony monitor above is used in environments where it just needs to work EVERY single time, and there is zero room for failure. As an example, take running shading (color grading) on a live broadcast, think events like the Super Bowl. Using any old monitor, you may not be able to tell that the Coke ad you are cueing up is going through your shading workflow and that its red branding is slightly off-color. That could easily be a million-plus-dollar mistake; I've seen similar things actually happen in the field (with other advertisers I will not mention, for my own sake). Or god forbid you lose picture entirely. I've been in similar positions, and broadcast engineers/companies will pay any amount of money to make sure their equipment is top of the line and won't ever fail. If you don't believe me, take a look at some other pro-grade video gear, like a Grass Valley Kayenne. The scale of money is simply different with pro video equipment.

pup_atlas,

It's worth noting that wireless transfer does not mean "cloud storage". It can, and often does, but it is also easy to wirelessly back up things like photos entirely locally. With most prebuilt NAS units, all you have to do is buy something like a Synology (some of them even come prefilled with hard drives) and go through the wizard in the app. That's it, and the app will wirelessly and automatically back up things like pictures to your own locally controlled storage. I'm pretty sure you can do it natively with Time Machine too, if you really wanted.

pup_atlas,

I work in tech, and I'm still using Chrome. I don't like it, and I know a lot of other tech people are in the same boat, but I can't just switch. That's what I'm working towards, but the amount of tooling we use every day that depends specifically on Chrome is significant, to say the least. This is tooling we built internally to help ourselves, and it depends on Chrome-specific APIs that are either different or do not yet exist in Firefox.

We're working to port this stuff over to Firefox, but that takes time, and not everyone can just drop what they are doing to reimplement the tooling they already have in a different browser. On top of userspace tooling, we also have tens of thousands of unit tests based in some part on Chrome (through tools like Jest and Puppeteer) to validate certain aspects of massive distributed web platforms that cannot easily be unit tested in normal code (though we have high coverage where we can). These also need to be ported, and are VERY specific to Chrome (or Chromium in some cases) in particular. We're talking entire teams of people and tens of thousands of man-hours.
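As a rough illustration (not from our actual suite; the test name, URL, and CDP call are made up for the example), a Jest + Puppeteer test like this ends up tied to Chromium because it talks to the Chrome DevTools Protocol directly:

```typescript
// Illustrative Jest + Puppeteer test. It depends on Chromium specifically
// because it opens a Chrome DevTools Protocol session, which a Firefox port
// would have to replace with different plumbing.

import puppeteer, { Browser, Page } from "puppeteer";

let browser: Browser;
let page: Page;

beforeAll(async () => {
  browser = await puppeteer.launch();
  page = await browser.newPage();
});

afterAll(async () => {
  await browser.close();
});

test("page loads without console errors", async () => {
  const errors: string[] = [];
  page.on("console", (msg) => {
    if (msg.type() === "error") errors.push(msg.text());
  });

  // Chromium-only: raw CDP session used by a lot of browser-level assertions.
  const cdp = await page.createCDPSession();
  await cdp.send("Network.enable");

  await page.goto("https://example.com", { waitUntil: "networkidle0" });
  expect(errors).toHaveLength(0);
});
```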

A lot of users truly can just switch at the drop of a hat. The UI switch is annoying, sure, but it's doable. For a lot of users in the tech space though, it's just not feasible to drop Chrome overnight. We've started the process, to be clear, but it's going to be a very long transition.

pup_atlas,

That's not what's happening though; they are using that data to train their AI models, which pretty irreparably embeds identifiable aspects of it into the model. The only way to remove that data from the model would be an incredibly costly retrain. It's not literally embedded verbatim anywhere, but it's almost as if you took an image of a book. The data is definitely different, but if you read it (i.e. make the right prompts, or enough of them), there's the potential to get parts of the original data back.

pup_atlas,

I’m aware the model doesn’t literally contain the training data, but for many models and applications, the training data is by nature small enough, and the application is restrictive enough that it is trivial to get even snippets of almost verbatim training data back out.

One of the primary models I work on involves code generation, and in those applications we've actually observed verbatim code from the training data being output by the model, even when there's a fair amount of training data it's been trained on. This has spurred concerns about license violations for the open source code it was trained on.

There's also the concept of less verbatim but more "copied" style. Sure, making a movie in the style of Wes Anderson is legitimate artistic expression, but what about a graphic designer making a logo in the "style of McDonald's"? The law is intentionally pretty murky in this department, with even some colors being trademarked for certain categories in the States. There's not a clear line here, and LLMs are well positioned to challenge what we have on the books already. IMO this is not an AI problem; it's a legal one that AI just happens to exacerbate.

pup_atlas,

The law is (in an ideal world) the reflection of our collective morality. It is supposed to dictate what is "right" and "wrong". That said, I see too many folks believing that it works the other way too, that what is illegal must be wrong, and what is legal must be OK. This is (decisively) not the case.

In AI terms, I do believe some of the things that LLMs and the companies behind them are doing now may turn out to be illegal under certain interpretations of the law. But further, I think a lot of the things companies are doing to train these models are seen as “immoral” (me included), and that the law should be changed to reflect that.

Sure that may mean that “stuff these companies are doing now is legal”, but that doesn’t mean we don’t have the right to be upset about it. Tons of stuff large corporations have done was fully legal until public outcry forced the government to legislate against it. The first step in many laws being passed is the public demonstrating a vested interest in it. I believe the same is happening here.

pup_atlas,

I'm out and about today, so apologies if my responses don't contain the level of detail I'd like. As for the law being collective morality: all sorts of religious prohibitions and moral scares HAVE ended up in the law. The idea is that the "collective" is large enough to dispel any niche restrictive beliefs. Whether or not you agree with that strategy aside, that is how I believe the current system works in an ideal sense (even if it works differently in practice); that's what it is designed to protect, from my perspective.

As for anti-AI artists, let me pose a situation for you to illustrate my perspective. As a prerequisite for this situation: a large part of a lawsuit, and of the ability to advocate for a law, is based on standing, the idea that you personally, or a group you represent, has been directly and tangibly harmed by the thing you are trying to restrict. Here is the situation:

I am a furry, and a LARGE part of the fandom is based on art and artists. A core furry experience is getting art of your character commissioned from other artists. It's commonplace for these artists to have a very specific, identifiable signature style, so much so that it is trivial for me and other furs to identify artists by their work alone at just a glance. Many of these artists have shifted to making their living full-time off of creating art. With the advent of some new generative models, it is now possible to train a model exclusively on one singular artist's style, and generate art indistinguishable from the real thing without ever contacting them. This puts their livelihood directly at risk, and also muddies the waters in terms of subject matter and what they support. Without laws regulating training, this could take away their livelihood, or even give a (very convincing, and hard-to-disprove) impression that they support things they don't, like making art involving political parties or illegal activities, which I have seen happen already. This almost approaches defamation, in my opinion.

One argument you could make is that this is similar to the invention of photography, which may have directly threatened the work of painters. And while there are some comparisons you could draw from that situation, photography didn't fundamentally replace their work verbatim; it merely provided an alternative that filled a similar role. This situation is distinct because in many cases it's not possible, or at least not immediately apparent, which pieces are authentic and which are not. That is a VERY large problem the law needs to solve as soon as possible.

Further, I believe the same or similar problems exist in LLMs as in the situation involving generative image models above. Sure, with enough training those issues are lessened in impact, but where is the line between what is OK and what isn't? Ultimately the models themselves don't contain any copyrighted content, but they (by design) combine related ideas and patterns found in the training data in a way that will always approximate it, depending on the depth of the training data. While overfitting might be considered a negative in the industry, it's still a possibility, and until there are some sort of regulations establishing the fitness of commercially available LLMs, I can envision situations in which management would cut training short once it's "good enough", leaving overfitting issues in place.

Lastly, with respect, I'd like to push back on both the notion that I'd like to ban AI or LLMs, and the notion that I'm not educated enough on the subject to adequately debate regulations on it. Both are untrue. I'm very much in favor of developing the technology and exploring all its applications. It's revolutionary, and worthy of the research attention it's getting. I work on a variety of models across the AI and LLM space professionally, and I've seen how versatile the technology is. That said, I have also seen how over-publicized it is. We're clearly (from my perspective) in a bubble that will eventually pop. We're claiming products use AI to do this and that across nearly every industry, and while LLMs in particular are amazing and can be used in a ton of applications, it's certainly not all of them; and I'm particularly cautious of putting new models in charge of dangerous or risky processes where they shouldn't be before we develop adequate metrics, regulation, and guardrails. To summarize my position: I'm very excited to work towards developing them further, but I want to publicly express the notion that they're not a silver bullet, and we need to develop legal frameworks for protecting people now, rather than later.

pup_atlas,

I doubt that would hold up in a court of law. The ability to record in public hinges on having no "reasonable expectation of privacy" while in public spaces. You DO have a reasonable right to privacy in the backyard of your own property, even if it's visible from some public airspace.

pup_atlas,

That has been a feature in all of their competitors for 10+ years.

pup_atlas,

It would definitely stop pretty much any counterfeit if they added some rudimentary depth data into the image format as well, within the signed contents. That way, simply taking a picture of a monitor would be obviously detectable, and not alterable without breaking the signature. It wouldn't have to be a high-resolution depth map at all, either.
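A minimal sketch of that idea, assuming Node's crypto module and made-up field names (not any real camera vendor's format): the depth data is bundled into the signed payload, so stripping or swapping it breaks verification.

```typescript
// Sketch: sign the image hash and a coarse depth map together, so neither can
// be altered or removed without breaking verification. The payload layout and
// field names below are invented for illustration.

import { generateKeyPairSync, sign, verify } from "node:crypto";

interface CapturePayload {
  imageSha256: string; // hash of the full-resolution image data
  depthMap: number[];  // e.g. an 8x8 grid of relative depths from the sensor
  capturedAt: string;  // ISO timestamp
}

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function signCapture(payload: CapturePayload): Buffer {
  // Ed25519 signs the raw bytes; no digest algorithm argument is needed.
  return sign(null, Buffer.from(JSON.stringify(payload)), privateKey);
}

function verifyCapture(payload: CapturePayload, signature: Buffer): boolean {
  return verify(null, Buffer.from(JSON.stringify(payload)), publicKey, signature);
}

// A re-photographed monitor would produce a suspiciously flat depth map, and
// splicing in a "plausible" one changes the payload, so verification fails.
const capture: CapturePayload = {
  imageSha256: "…", // placeholder
  depthMap: Array(64).fill(1.0), // flat: likely a picture of a screen
  capturedAt: new Date().toISOString(),
};

const sig = signCapture(capture);
console.log("valid:", verifyCapture(capture, sig));
```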
