schizanon, (edited)

I hear that people think that a firewall/router server like pfSense/OPNsense ought to be a discrete piece of hardware, not just a VM, because:

  1. stability; no reboots when the host updates
  2. the firewall should get full control of the NICs, probably for security's sake?

Are these valid? Does the host have a lot of downtime? It seems to me that would be a deal-breaker for a firewall.

What if I dedicate my NICs to the firewall VM and use the wifi for management?

josephholsten,

@schizanon Mostly I want my edge router to have one interface for WAN, another for LAN, and I don’t want it virtualized because of spectre, hypervisor vulns, noisy neighbor.
Also, my brain refuses to make sense of the right way to configure routing with a virtualized edge router.
But I know there are much smarter people than me virtualizing their firewalls, and not just for SDN. I just haven’t learned modern networking well enough.

schizanon,

@josephholsten big same!

fwaggle,
@fwaggle@moodoo.org avatar

@schizanon Virtualizing network infrastructure is fine, with two caveats:

  • ensure you don't end up with a dependency loop, where in the event of a failure each thing depends on the other and won't start without it.

  • Virtualizing your home or office network connection is a good way to turn your homelab into homeprod, which has wildly different goals and SLAs, and you should not do it unless you live by yourself or with very understanding people who consent.

That C word is not optional.
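One way to guard against the dependency-loop caveat above, assuming a Proxmox host as discussed later in the thread (the VM ID 100 is hypothetical), is to pin the router VM so it boots first and unconditionally, before anything that depends on it:

```shell
# Hypothetical Proxmox sketch: make the router VM (ID 100) auto-start on host
# boot, first in the start order, with a 30-second head start before other
# VMs come up.
qm set 100 --onboot 1 --startup order=1,up=30
```

This doesn't remove the loop (e.g. storage that needs the network to mount), but it at least guarantees the router isn't waiting on its own dependents at boot.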

mario,
@mario@hachyderm.io avatar

@schizanon in my experience Proxmox has been really stable and requires little maintenance.

My main concern is that I use my home lab as a place where I can try things out and where breaking stuff is okay.
Separate VMs should be enough to be safe but I wouldn’t want to find myself in the situation where I try something out, it doesn’t go according to plan and I have limited connectivity to fix the issue.

Moreover, the network is also special because everyone in my family uses it.

mhamzahkhan,
@mhamzahkhan@intahnet.co.uk avatar

@schizanon I think a lot of this is down to preference.

  1. I mean, if you are using a cluster, you can live-migrate between hosts to ensure no downtime for your firewall. Besides, if you schedule your hypervisor and firewall updates for the same time, the reboots coincide, so it doesn't make a difference.

  2. Apart from security, there's also performance. Generally people want to max out the NICs on their firewall/router. Giving the firewall full access means it isn't sharing them with other VMs etc.
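The live migration described in point 1 might look like this on a Proxmox cluster (VM ID 100 and node name pve2 are hypothetical):

```shell
# Hypothetical sketch: move the running firewall VM (ID 100) to node pve2.
# --online keeps the guest running during the migration, so the network
# stays up while the original host reboots for updates.
qm migrate 100 pve2 --online
```

Note that seamless online migration generally requires shared (or replicated) storage between the nodes; otherwise the disk has to be copied as part of the move.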

AngryAnt,
@AngryAnt@mastodon.gamedev.place avatar

@schizanon I want both my concerns and my performance crunching separated. But I still have an internal router VM in every node.

Regardless of frequency, having to also have my whole network down when I'm troubleshooting a pve node is not my idea of fun.

There's probably also something in there about not potentially messing with the security of the full network when I just want to real quick rig comms within a node for some new VM - so I just do it in the internal router.

schizanon,

@AngryAnt so if you need to expose a service to the internet you have to port forward through two routers?

AngryAnt,
@AngryAnt@mastodon.gamedev.place avatar

@schizanon No for that I would isolate the service on a separate node in an isolated network and have it form an outbound connection to a relay.

Security paranoia aside, it greatly increases flexibility: The host end only has to authenticate with the relay with strong asymmetric keys - no host requirement of IP or other network setup.
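AngryAnt doesn't specify the relay mechanism; a minimal sketch of the same idea, using a plain SSH reverse tunnel to a VPS (the hostname relay.example.com, user, and ports are all assumptions), would be:

```shell
# Hypothetical relay sketch: the host behind the firewall dials OUT to the
# VPS and asks it to forward its public port 443 back to the local service
# on 8443. Authentication is by SSH keypair only; no inbound firewall rules
# or fixed IP are needed on the host side.
ssh -N -R 0.0.0.0:443:localhost:8443 tunnel@relay.example.com
```

For the relay to bind its public interface, its sshd_config needs `GatewayPorts yes`; in practice something like autossh or a WireGuard tunnel would make this more robust than a bare ssh process.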

AngryAnt,
@AngryAnt@mastodon.gamedev.place avatar

@schizanon Meaning I can shove it on a laptop node and take it on the road if I want.

Or, more likely, seamlessly shuffle between dev/client/live hosts.

The relays are just thin VPSes, hosted wherever, with the right keys.

AngryAnt,
@AngryAnt@mastodon.gamedev.place avatar

@schizanon Ofc. some services come with relays already - like Plex and Steam. This manual relay thing I generally only do for in-development stuff.

Point is I don't feel a need to poke holes in my firewall, and anything which indirectly offers an external service I treat as already lost, isolating it as well as I can. Deployment to such services all goes via external services.

AngryAnt,
@AngryAnt@mastodon.gamedev.place avatar

@schizanon A fun side effect: I have my router on a UPS with a light isolated node holding some external services. It also has a 4G modem on it as secondary WAN.

The intent was to cover the scenario where someone digs over a fiber cable somewhere, but this setup also means that should we have a power cut, the node stays online via the 4G connection.

Don't want to think about the kind of mess involved in replicating that with a NAT punch-through setup.

blasteh,

@schizanon my OPNsense is a VM; I just live-migrate it to a different host if I need to do host maintenance.

schizanon,

@blasteh hmm... I will have two NUCs with dual 2.5GbE now... I could use the old one as a backup.
