@ramsey @maccath @GeeH The 500s aren't file-size related. The thumbnail generator is tripping over something else file-format-wise (I posted some more details to our admin group /cc @derickr), which is probably why I haven't been able to repro when I've tried.
Thanks to borrowing an eSIM from @zonuexe, I've now tested all four physical Japanese mobile networks, including the elusive Rakuten Mobile. My phone didn't lock onto their #5G, but seeing #LTE on them was good enough.
That's in addition to LTE and 5G on SoftBank and NTT, plus LTE on KDDI, that I was able to source myself :)
Yep, I'm that much of a cell network nerd. Any questions?
1 TB of storage (an outrageous amount of storage if you ask me) is what you need for 35 active users, and it'll cost you €14/mo at Scaleway (a European cloud provider). On top of that comes €38/mo for the web server, bringing the total to €52/mo.
Can you imagine how expensive it gets for a popular instance?
@thor The costs don't scale linearly. Storage requirements on the object storage side don't get all that much worse, and you can get $6/TB/mo from a few places (Vultr, Backblaze B2, Wasabi). And while there are definite breakpoints on compute costs, "enough to run the Rails app that is Mastodon" already scales you up a fair amount.
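A quick back-of-envelope sketch of the figures from the posts above (prices as quoted there; purely illustrative, and mixing EUR and USD as the posts do):

```python
# Small-instance hosting costs as quoted above (Scaleway, EUR/mo).
scaleway_storage_eur = 14   # 1 TB object storage
scaleway_server_eur = 38    # web server
total_eur = scaleway_storage_eur + scaleway_server_eur
print(f"Scaleway total: €{total_eur}/mo")  # €52/mo for ~35 active users

# Cheaper object storage mentioned above (Vultr, Backblaze B2, Wasabi).
alt_storage_usd_per_tb = 6  # USD/TB/mo
print(f"Alternative object storage: ${alt_storage_usd_per_tb}/TB/mo")
```

The point being: storage is the part that stays cheap as an instance grows, while compute steps up at discrete breakpoints rather than per-user.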
Is there anyone in Kobe/Osaka/Tokyo who can hook me up with a Rakuten Mobile physical or eSIM? Probably impossible since I don't have a Japanese ID card, but would be cool to test all four mobile networks here.
@chaos0815 @nauleyco Yeah. I already have Airalo, Ubigi, Saily, and (via US Mobile) eSIM Club. Between those I can access SoftBank, KDDI/au, and NTT, with 5G on SoftBank and NTT. Technically I'm missing 5G on au as well, but I'm not sure I can get that as a foreigner.
In which case, curious how much of your time you spend on their network, and how performance is on 4G and 5G, and how much time you have to fall back to roaming on...KDDI, right?
@zonuexe @nauleyco @chatii @Girgias @tekimen @KentarouTakeda Yeah, I have a couple of physical SIMs that run on docomo LTE and don't bounce out of the country for routing. I also have a bunch of eSIMs. One has docomo 5G, one has SoftBank 5G, one has SoftBank and KDDI LTE, and one has...docomo LTE I think? Most route through Singapore but one routes through Hong Kong.
The SoftBank 5G eSIM actually hit 700 Mbps down, 80 Mbps up earlier today near Shin-Kobe, which was impressive.
Instead of having two PHP/PHP-adjacent conferences in Texas in the same year, we're taking a year off, and throwing our weight behind @cascadia in Portland, Oregon.
That's right. The Pacific Northwest is getting a PHP conference for the first time since 2019!
#MastoAdmin PSA: Vultr cloud storage costs are now down to $6/TB/mo.
Backblaze B2 still has the edge on data transfer costs (they bundle data transfer up to 3x the storage amount), but IIRC bandwidth is pooled with cloud servers on Vultr anyway, so you probably won't pay for outbound, and it's one less vendor in the stack to manage. Which is nice when you have better things to do than babysit various vendor accounts as part of running an instance :)
@Whiskeyomega One extra fun thing is on their (beta) CDN you can literally check a box to block LLM crawler user agents. Which is nice for, say, media hosting for a fediverse instance.
(we're still on bunny right now for CDN, as there's no way in the Vultr UI...yet...to use our own subdomain for the CDN)
They use #OpenAI, which means my GitHub OSS has almost certainly been used in training data.
They rely on OpenAI's promise to not ingest any code that is used for "context".
They specifically do not disclaim that their tool could result in me violating someone else's copyright, and they could suggest the same code to someone else, too.
Uninstall this crap, now. It's dangerous and irresponsible.
@johne @Crell You can uninstall the JetBrains AI plugin without uninstalling the entire IDE, so...that's sufficient IMO.
There's also LLM-enabled code completion you can turn on, but that's all run locally, and since OpenAI doesn't give you stuff you can run locally, that's safe I think.
EDIT: I meant to say ML-enabled here, not LLM-enabled. Management regrets the error.
@Crell @johne That would be something worth asking JetBrains about. The times I've turned ML autocomplete on, it more or less matched the idioms of the codebase I was working on and didn't expand things out significantly, so maybe it's pulling from something sufficiently small (and it's marked as ML-based, not LLM-based).