TCB13
@TCB13@lemmy.world

TCB13,

Yet we still don’t have a proper way to mirror parts of (or the entire) repository and/or keep useful offline archives of Flatpaks for certain use cases.

Reaching a service through a domain from the local network

I think I have a stupid question, but I couldn’t find an answer to it so far :( When I want to reach a service that I host on my own server at home from the local network at home, is using a public domain an effective way to do it, or should I always use the server’s IP when configuring something inside the LAN? Is my traffic routed through...

TCB13,

is using a public domain an effective way to do it or should I always use the server’s IP when configuring something inside the LAN? Is my traffic routed through the internet somehow when using the domain even in the LAN, or does my router know not to do this?

It depends.

If you control your router (i.e. it isn’t ISP-provided) you can just go into the router settings and tell it to always resolve your public domain to the local machine’s IP. Any computer on the network running a DNS query will then get the local IP for that domain instead of the public one. Quick and easy fix.
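
As a minimal sketch of that override, assuming the router runs dnsmasq (common on OpenWrt and many consumer routers); the domain and address below are placeholders:

```
# /etc/dnsmasq.conf - answer LAN queries for the public name with the local address
# "service.example.com" and 192.168.1.10 are hypothetical values
address=/service.example.com/192.168.1.10
```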

If you don’t control it / can’t apply the fix above, your traffic is most likely still not routed through the internet, because routers are usually configured for hairpinning / NAT loopback and will simply forward the traffic internally.

You can test what’s going on by using traceroute (or tracert on Windows) to see where the traffic is going. It prints a line for each host your traffic has to pass through to reach the destination. If you need help reading the output, just post it with public IPs redacted.
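
For example, with a hypothetical domain, you could compare the path to the public name against the path to the LAN address:

```
traceroute service.example.com   # Linux/macOS
tracert service.example.com      # Windows equivalent
```

If the trace only shows one or two private hops (192.168.x.x / 10.x.x.x), the traffic never left your LAN.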

TCB13,

But even if you don’t fully control the device, you can usually change DHCP DNS so that LAN clients will use your local DNS servers.

Not all ISPs allow this. Mine, for instance, doesn’t allow changing any LAN DHCP setting… fortunately it has an option to configure one of the ports as a “bridge”, which gets a public IP, so I can just plug in my own equipment and do whatever I want.

TCB13, (edited )

Does creating a VPN into my home network using my router increase my attack surface?

Yes, but it also provides the ability to access any resource in your network in a secure way.

It is typically less safe to expose the 3 or 4 different services you want remote access to than to expose a single VPN daemon that is actually designed for that specific scenario and has mitigations for common attacks built in.

To make your setup secure you can consider a few steps:

  • Use WireGuard: don’t be afraid to expose the WireGuard port. If someone tries to connect without authenticating with the right key, the server silently drops the packets; an attacker won’t even know there’s something listening on that port, it will be invisible to typical port scans and will ignore any traffic that isn’t properly encrypted with your keys (a minimal config sketch follows this list);
  • Use a 5-digit port for your VPN - something like 23901 (up to 65535) will be way harder to find than typical ports like the default 51820 or 443;
  • Go full paranoid and use a firewall to restrict which countries, or even which days and hours, are allowed to access your server. E.g. only allow incoming connections from your country (wiki.nftables.org/wiki-nftables/…/GeoIP_matching). Be aware of what happens when you’re abroad;
  • Don’t port forward IPv6 if you don’t need it. It might be easier than dealing with a dual-stack firewall and/or other complexities.
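
As a minimal sketch of the WireGuard point above: the addresses and keys are placeholders, only the non-default 23901 port is taken from the list:

```
# /etc/wireguard/wg0.conf - server side (hypothetical addresses and keys)
[Interface]
Address = 10.8.0.1/24             # tunnel subnet, separate from your LAN range
ListenPort = 23901                # non-default 5-digit port
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32          # only this tunnel address is accepted from this peer
```

Anything hitting port 23901 without the matching keys simply gets no reply, which is what makes the port effectively invisible.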

On a side note: a VPN doesn’t have to mean full access to your network either. You can set up a VPN endpoint that only allows access to a few specified services running on specific machines instead of the entire network. This will give you extra security if you’re into that.
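
A rough sketch of that idea with nftables on the VPN host; the interface name, address and ports are placeholders, not a drop-in ruleset:

```
# Let VPN clients (wg0) reach a single internal web service and nothing else
table inet vpn_restrict {
  chain forward {
    type filter hook forward priority 0; policy accept;
    ct state established,related accept
    iifname "wg0" ip daddr 192.168.1.10 tcp dport { 80, 443 } accept
    iifname "wg0" drop    # everything else coming from the VPN is dropped
  }
}
```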

TCB13,

You can also configure your server to only accept traffic on the VPN port coming from your home IP address, if you have a static one. Or… only allow incoming connections from your country (wiki.nftables.org/wiki-nftables/…/GeoIP_matching). This will provide an extra layer of security.
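
A minimal sketch of the static home IP restriction with nftables; 203.0.113.7 and port 23901 are placeholders (the GeoIP variant needs the extra setup described in the linked wiki page):

```
# Silently drop WireGuard traffic unless it comes from the known home address
table inet wg_filter {
  chain input {
    type filter hook input priority 0; policy accept;
    udp dport 23901 ip saddr != 203.0.113.7 drop
  }
}
```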

Either way, don’t be afraid to expose the WireGuard port: an attacker won’t even know there’s something listening on it, as it ignores any traffic that isn’t properly encrypted with your keys.

TCB13,

If you want stability, use the latest Debian. The point of those LTS kernels is increasingly to support IoT and other devices you can’t simply upgrade but want to keep secure… regular use cases can just use a stable distro like Debian and you’ll never notice any kernel-related issues.

TCB13, (edited )

Yes, this industry is pretty much a race to the bottom (when it comes to wages), adding methodologies and micromanagement at every corner to make people more “productive”. It’s just sad to see that most people don’t realize they’re in that race, simply because IT still pays more than average and/or doesn’t require as many certificates as other fields to get into. The downside of the lack of professionalization is that developers get abused every day, and benefits like the freedom to negotiate your higher-than-average salary are quickly vanishing in the face of ever more complex software and big consulting companies taking over the internal development teams and departments companies used to have.

To make things worse, cloud / SaaS providers keep profiting from this mess by reconfiguring the entire development industry in a way that favors the sale of their services and erodes the knowledge developers used to need to develop and deploy solutions. Companies such as Microsoft and GitHub are all about re-creating and reconfiguring the way people develop software so everyone becomes hostage to their platforms. We now have a generation of developers that doesn’t understand the basics of their tech stack, networking, DNS, or how to deploy a simple thing to a server that doesn’t use some orchestration with service x or isn’t some 3rd-party cloud xyz deploy-from-GitHub service.

Consulting companies that make software for others also benefit from this “reconfiguration”, as they can hire more junior or less competent developers and transfer the complexity to those cloud services. The “experts” who work in consulting companies are part of this, as they usually don’t even know how to do things without the proprietary solutions. Let me give you an example: once I had to work with E&Y, one of those big consulting companies, and I realized some awkward things while having conversations with both low-level employees and partners / middle management: most of the time they weren’t aware that alternatives exist. A manager of a digital transformation and cloud solutions team, who started his career at E&Y, wasn’t aware that there were open-source alternatives to Google Workspace and Microsoft 365 for e-mail. I probed a TON around that, and the guy, a software engineer with a university degree, didn’t even know what Postfix was or the history of e-mail.

All those new technologies keep pushing this “develop and deploy quickly” mindset and commoditizing development - it’s a feedback loop that never ends. Yes, I say commoditizing development because, if you look at it, those techs only make things easier for the entry-level developer, and companies, instead of hiring developers for their knowledge and ability to develop, are just hiring “cheap monkeys” able to configure those technologies and cloud platforms to deliver something. At the end of the day, the business of those cloud companies is transforming developer knowledge into products/services that companies can buy with a click.

TCB13,

Maybe the issue was that you were using it to access some kind of Microsoft service and their improper IMAP implementation.

TCB13,

Not a single screenshot was provided.

TCB13,

^ Boils down to not being hostage to a single provider and whatever it offers.

TCB13,

Only annoying thing is not supporting ProtonMail out of the box.

That’s Proton’s fault; they’re the ones who decided to ignore all the open, standard e-mail, contacts and calendar protocols out there and built a custom-everything stack to keep you vendor-locked into their interfaces.

TCB13,

just more difficult to connect when the provider wants to keep things secure.

Proton could’ve just implemented everything they did with IMAP/SMTP on Thunderbird + OpenPGP with the same level of security, but they decided not to. Yes, their solution is convenient, but it’s also closed to everything else.

TCB13,

NixOS is just another attempt at changing the way fundamental things are done so that one day they can introduce some orchestration / repository / xyz paid solution. Yet another step in the commoditization of software development.

TCB13, (edited )

I don’t yet… but a few months ago nobody believed they could take on a sponsorship from Anduril. Nor that they would enact a somewhat vague policy guide pushing the idea that the community is all that matters and that all further important decisions will be community-driven, without actually defining who the community is.

TCB13,

Making a DNS server not respond to queries for a specific name is trivial for any DNS provider to implement

It might not be that easy; you’re thinking about one single server running some kind of DNS software you’re familiar with. When we’re talking about Quad9, Cloudflare, etc., we’re talking about hundreds of servers across the planet, highly distributed solutions that rely on anycast and other non-trivial techniques. If you have to change a system like that to add the ability to block something, trust me, it won’t take a few hours.
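
To be clear about what the “trivial” part looks like on one box, here is a minimal sketch using Unbound as an example; the domain is a placeholder and this is not necessarily the software these providers actually run:

```
# unbound.conf - refuse to resolve a single name on this resolver only
server:
  local-zone: "blocked.example.com." always_nxdomain
```

The rule itself is a one-liner; the hard part is rolling it out consistently and safely across a global anycast fleet.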

TCB13,

And blocking websites is trivial.

Nothing is trivial at scale. When we’re talking about Quad9, Cloudflare, etc., we’re talking about hundreds of servers across the planet, highly distributed solutions that rely on anycast and other non-trivial techniques. If you have to change a system like that to add the ability to block something, trust me, it won’t take a few hours, and a LOT of testing will be required before pushing it into production.

TCB13,

DNS is literally distributed by design

You know you can set up a single instance of PDNS or any other resolver, and by default they all work in a non-distributed way. You’re assuming too much; and again, while it is likely that most providers running custom stacks already have the functionality built in, it isn’t guaranteed, and once you have thousands of servers running the same piece of software across the globe, deploying updates and features becomes way slower and way harder.

TCB13,

Yes, it is likely that most providers running generic or custom stacks already have the functionality built in, and also yes, adding an “if” is easy. But once you have thousands of servers running the same piece of software across the globe, deploying updates and features becomes way slower and way harder. You have to consider tests, regressions, a way to properly store and synchronize the blocklists across nodes, etc…

TCB13,

Nothing is “built into DNS”. DNS is a couple of RFCs that specify how the thing should work. What features one implementation (software) has is a decision of those who made it and nothing else.

TCB13,

I can do this in like 5 seconds with my PiHole and not only am I not a network engineer,

Exactly. Now consider Cloudflare, for instance: adding an “if domain is blocked” check is easy, but once you have thousands of servers running the same piece of software across the globe, deploying updates and features becomes way slower and way harder. You have to consider tests, regressions, a way to properly store and synchronize the blocklists across nodes, etc…

I’m not saying it can’t be done, because it can. But it will take longer and it will be a problem for someone. Besides, you only have that point-and-click interface in your PiHole that allows you to do it in .02 because someone spent a few hours developing the feature. :)

TCB13,

The ability to selectively respond to DNS requests is integral to the function of DNS.

The availability of such a feature, and how useful it is for blocking something, depends on the actual implementation (software) you’re using.

TCB13,

The thing is, everything you said is correct. But if you think they can just solve this globally, for everyone and everything, without delays, by just pushing things to their root servers or the first line of authoritative ones, then what else can I say.

TCB13,

This is why when you update a record for your domain it’s updated globally in near real time with multiple providers

So, you know that “near real time” is different from actual real time.

TCB13,

Okay, so tell me something: in your “ideal world”, replication-only scenario, what happens if they’re ordered to take down a specific A record that has a very large TTL, larger than a government would like?

TCB13,

I was about to tell you that when I made the post I was more joking about it than actually being serious… but then after your systemd comment…
