I'm not an IPv6 fundamentalist, but it really is time that our ISPs started allowing us to get IPv6 addresses if we want them. #AWS will no longer be providing free IPv4 addresses to their customers from February (that's only 3 months from now), and that includes non-permanent DHCP addresses, for all services. I expect a number of websites and services will decide there's no need to pay for IPv4 addresses when most of the Internet has at least rudimentary support for IPv6 by now, which means that they'll simply vanish from sight for most South African users.
And where AWS leads, other cloud providers eventually follow.
So who do we speak to to get things moving? All I get from the support channels is polite confusion, because there's no "Add new service" item on their choose-your-own-adventure support scripts, and no way to escalate to decision makers.
Edit: There are local ISPs who will allow you to get an IPv6 address. These are few and far between, though.
#Amazon 👉 support your local shops
#Alexa 👉 Dicio or Mycroft
#Appstore 👉 Aurora Store or Neo Store
#Luna 👉 Parsec or Steam Remote Play
#Music 👉 Nuclear, ViMusic or Spotube
#AmazonPay 👉 PayPal, Wise...
#PrimeVideo and #FireOS 👉 Jellyfin with WebOS
#Twitch 👉 Kick or DLive
#Ring 👉 the integrations built into Home Assistant
#Kindle 👉 Kobo and Calibre
#AWS 👉 Oracle or IBM Cloud
#Drive 👉 Nextcloud or Cryptpad
With #googledomains being the latest product killed by Google, is anyone else trying to transfer their .dev domain out? #aws R53 doesn’t support the dev tld, and #cloudflare says it’s “coming soon” with no ETA… Any/all advice is welcome
A client asked me if it was a good idea to move to GCP. I told him that, regardless of the costs, he'd have a better chance of getting the Pope on the phone than meaningful support from Google.
The cloud equivalent of buying a used car. You're pretty much on your own. Plus very few professionals in our region are getting certified on it.
#AWS #Pulumi #Terraform frens, I have a weird thing. I rearranged the security group configuration for an EC2 instance config so that it has four CIDR blocks per inbound rule. This is intended for a multi-subnet SQL Server WSFC. The only change is adding CIDR blocks for a third node that's in a different subnet. That's 4 CIDR blocks total: 1 in us-west-2-lax-1a, 1 in us-west-2-lax-1b, 1 in us-west-2a, and a 10.x block. Only 3 CIDR blocks get added per inbound rule. Is there a limit in Pulumi?
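For what it's worth, AWS itself allows far more than four CIDRs per rule (the default quota is 60 rules per security group), so the cap is unlikely to be on the AWS side. A minimal sketch for narrowing it down: model the rule as plain data before handing it to Pulumi, so you can see whether all four CIDRs survive your own code. The CIDR values and helper name below are made up for illustration:

```python
# Hypothetical sketch: model the intended inbound rule as plain data first,
# so you can confirm all four CIDR blocks exist before Pulumi sees them.
# CIDR values are placeholders, not the real subnet ranges.

INTENDED_CIDRS = [
    "10.0.0.0/16",       # 10.x internal block (placeholder)
    "172.31.0.0/20",     # us-west-2a subnet (placeholder)
    "172.31.16.0/20",    # us-west-2-lax-1a subnet (placeholder)
    "172.31.32.0/20",    # us-west-2-lax-1b subnet (placeholder)
]

def make_ingress_rule(port: int, cidrs: list[str]) -> dict:
    """Shape loosely mirrors pulumi_aws.ec2.SecurityGroupRule arguments."""
    # De-duplicate while preserving order; a silently collapsed duplicate
    # or overlapping CIDR string is one mundane way to end up with only 3.
    seen, unique = set(), []
    for c in cidrs:
        if c not in seen:
            seen.add(c)
            unique.append(c)
    return {
        "type": "ingress",
        "protocol": "tcp",
        "from_port": port,
        "to_port": port,
        "cidr_blocks": unique,
    }

rule = make_ingress_rule(1433, INTENDED_CIDRS)
assert len(rule["cidr_blocks"]) == 4, rule["cidr_blocks"]
```

If the plain-data model carries four CIDRs but the created rule only shows three, comparing against `pulumi preview --diff` output should show where the fourth one disappears.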
I have a situation where I have #kubernetes (#AWS #EKS) and I need to pull Docker images from a private registry. The registry has an SSL cert issued by a custom/private CA, meaning none of my worker nodes will accept it. What are my options? Do I need to get the CA into the cert store of my worker nodes? Can I tell Kube not to verify the cert (knowing this isn't the correct method, but is it even an option)?
Asking for boosts... I'm not quite sure how to ask search engines this question.
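Boosting with a sketch of the usual answer: yes, the standard fix is getting the CA into the trust store on each node (via launch-template user data or a privileged DaemonSet on EKS). A minimal sketch, assuming Debian/Ubuntu-style nodes; the paths and filename are assumptions:

```python
# Sketch of the common fix: copy the private CA into each node's trust store
# so containerd/Docker will trust the registry. Paths are Debian/Ubuntu-style;
# the destination filename is a made-up example.
import pathlib
import shutil
import subprocess

def install_ca(ca_pem: str, trust_dir: str, run_update: bool = True) -> pathlib.Path:
    """Copy a CA certificate into the system trust directory.

    On Debian/Ubuntu nodes trust_dir is /usr/local/share/ca-certificates;
    the copy must end in .crt, then update-ca-certificates rebuilds the store.
    """
    dest = pathlib.Path(trust_dir) / "private-registry-ca.crt"
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(ca_pem, dest)
    if run_update:
        # Requires root; rebuilds /etc/ssl/certs from the trust directory.
        subprocess.run(["update-ca-certificates"], check=True)
    return dest
```

Alternatives I'm aware of: Docker reads a per-registry CA from /etc/docker/certs.d/&lt;registry-host&gt;/ca.crt, and containerd supports a per-registry hosts.toml with a `ca` entry — plus a `skip_verify` option, which matches the "don't verify" route you mentioned, but is best left off.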
I've already hit #AWS #Lambda's default 5s timeout fetching just two #mastodon timelines with https://schizo.social. I'm going to need a cleverer solution for making multiple API requests to build a single page. Maybe prefetching, maybe spawning a background process and updating the UI via a #WebSocket when they complete? I'm trying to keep it as free of client-side #JavaScript as I can, but waiting on multiple masto servers just isn't going to work...
Hello #wordpress people - what's the best way to host WordPress in #AWS? I'm looking at ECS/Fargate/EFS but want to hear real-world solutions - do you just use a managed service?
Thank goodness our HA works because we've had the fourth incident of EC2 instances failing in as many weeks. All of them in the #AWS LAX local zone. Today was the biggest, practically our whole farm and both A and B availability zones. At least 100 instances.
Like what can it do that Simple #Queue Service can't? I know it's push-based and that has its benefits, and obvs it has integrations with SMS/email that probably come in handy, but those seem like improvements you could just make to #SQS instead of having a whole separate system.
My mental model of EC2 (possibly always wrong, possibly dated) is that it's a great fit if you need a virtual machine in the cloud that you can easily replace. Sometimes you need one, sometimes you need 10? Great. AWS needs you to stop using that particular hardware? Spinning up something new and moving shouldn't be a big deal.
Is this true today?
Would it be a bad idea to stick something on EC2 that requires several person-hours of effort if we have to change instances?
Have been fighting with #aws infrastructure for the last 24 hours, and definitely making progress - the us-west-2 zone is now experiencing network outages. #success https://xkcd.com/349/
Hmmm so question for you #MySQL/#AWS folk out there:
I always thought that, at least for databases that are less than gigantic, "mysqldump --single-transaction" was a good backup option.
Turns out that doesn't work on AWS Aurora, apparently because it doesn't grant you the privileges needed for FLUSH TABLES WITH READ LOCK.
What am I missing? Is that option obsolete or no longer needed to get a consistent backup? Do people use completely different means to back up Aurora databases?
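If I remember right, --single-transaction by itself takes a consistent InnoDB snapshot and never issues FLUSH TABLES WITH READ LOCK; it's companion options like --master-data/--source-data or --flush-logs that add the global lock, which the Aurora master user isn't privileged to take. A sketch of an invocation that avoids those (host/user/database are made up); the other common route is skipping mysqldump entirely and using Aurora cluster snapshots:

```python
# Sketch: build a mysqldump invocation that avoids FLUSH TABLES WITH READ LOCK.
# --single-transaction alone uses a consistent InnoDB snapshot; the global lock
# comes from options like --master-data/--source-data or --flush-logs, which
# the Aurora master user can't run. Host/user/database are made-up examples.
import shlex

def aurora_dump_cmd(host: str, user: str, database: str) -> list[str]:
    return [
        "mysqldump",
        f"--host={host}",
        f"--user={user}",
        "--single-transaction",   # consistent snapshot; InnoDB tables only
        "--set-gtid-purged=OFF",  # don't embed GTID state in the dump
        "--routines", "--triggers",
        database,
    ]

cmd = aurora_dump_cmd("mycluster.cluster-abc.us-west-2.rds.amazonaws.com",
                      "admin", "appdb")
assert "--single-transaction" in cmd
assert not any(flag.startswith("--master-data") for flag in cmd)
print(shlex.join(cmd))
```

The caveat is that the snapshot is only consistent for InnoDB tables; if there are MyISAM tables in the mix, or you need binlog coordinates for replication, a native Aurora snapshot is probably the saner backup.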