pyrosis
@pyrosis@lemmy.world

Mirror all data on NAS A to NAS B

I’m duplicating my server hardware and moving the second set off site. I want to keep the data live since the whole system will be load balanced with my on-site system. I’ve contemplated tools like syncthing to make a 1-to-1 copy of the data to NAS B, but I know there has to be a better way. What have you used successfully?

pyrosis,

My favorite is using the native zfs replication capabilities, zfs send/receive. Though that requires zfs and snapshots configured properly.
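Something like this, as a minimal sketch with made-up pool and host names:

zfs snapshot tank/data@sync1
zfs send -R tank/data@sync1 | ssh nas-b zfs receive -F tank/data
# later, ship only the changes between two snapshots
zfs snapshot tank/data@sync2
zfs send -R -I tank/data@sync1 tank/data@sync2 | ssh nas-b zfs receive -F tank/data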

pyrosis,

I noticed some updates on live video streaming. I do wonder if that will help with how jellyfin interprets commercial breaks.

Let’s say I have an m3u8 playlist with a bunch of video streams. I’ve noticed in jellyfin that when they go to something like a commercial, the stream freaks out. It made me wonder if the player just couldn’t understand the ad insertion.

Anyway, wonderful update regardless and a huge improvement.

pyrosis,

I use docker networks, but that’s me. They are created for every service and it’s easy to target the gateway. Just make sure DNS is correct for your hostnames.
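For illustration, compose already creates a per-project network, but you can create one explicitly and check its gateway (network name made up):

docker network create jellyfin_net
docker network inspect jellyfin_net --format '{{(index .IPAM.Config 0).Gateway}}'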

Lately I’ve been optimizing remote services for reverse proxy passthru. Did you know it can momentarily break streams and make your proxy work a little harder if your hostnames don’t match outside and in?

In other words, if you want full passthru of a tcp or udp stream to your server, without the proxy breaking it and then opening a new stream, you have to make sure the internal network and external network use the same fqdn for the service you are targeting.

It can actually break passthru via SNI if they don’t use the same hostname, causing a slight delay. That kinda matters for things like streaming video, especially if you are using a reverse proxy and the service supports QUIC or HTTP/2.

So a reverse proxy entry that simply passes the stream without breaking it and resending might look like…

Obviously you would need to get the https port working on jellyfin and have ipv6 working with internal DNS in this example.


server {
    listen 443 ssl;
    listen [::]:443 ssl;  # Listen on IPv6 address

    server_name jellyfin.example.net;

    ssl_certificate /path/to/ssl_certificate.crt;
    ssl_certificate_key /path/to/ssl_certificate.key;

    location / {
        proxy_pass https://jellyfin.example.net:8920;  # Use FQDN
        ...
    }
}
pyrosis,

I agree with this. The only vm I have with multiple interfaces is an opnsense router vm heavily optimized for kvm to reach 10gb speeds.

One of the interfaces, beyond wan and lan, links to a proxmox services bridge. It’s a proxbridge I gave to a container and is just a gateway in opnsense. It points traffic destined for services directly at the container ip, which keeps the service traffic on the bridge instead of having to hit the physical network.

Move UnRaid from metal to Proxmox (lemmy.ml)

I have a trusty UnRaid server that has been running great for almost 3 years now, with some kinks and headaches here and there, but mostly very stable. Now I’m entertaining the idea of setting that box up with ProxMox, and running UnRaid virtualized. The reason being that I want to use UnRaid exclusively as a NAS and then run...

pyrosis,

I agree with most of your comment, minus the freedom to choose different disk sizes. You absolutely can do that with btrfs, or by throwing a virtual layer on top of some disks with something like mergerfs.

pyrosis,

It’s the production vs development issue. My advice is the old tech advice: “If it’s not broken, don’t try to fix it.”

Modified here into: keep a separate proxmox development environment. Btw, proxmox is perfect for this with vm and container snapshots.

When you get a vm or container into a more production-ready state, then you can attempt migrations. That way the users don’t kill you :)
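For reference, proxmox snapshots from the cli look like this (ids and names made up):

qm snapshot 100 pre-migration    # snapshot a vm
pct snapshot 101 pre-migration   # snapshot an lxc container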

pyrosis,

Have you considered the increase in disk io, and that hypervisors prefer to be in control of all hardware? Including disks…

If you are set on proxmox, consider that it can directly share your data itself. This is made easy with cockpit and the zfs plugin; the plugin helps if you have existing pools. Both can be installed directly on proxmox and present a separate web UI with different options for system management.

The safe things to use here are the filesharing and pool management operations. Basically, use the proxmox webui for everything it permits first.

Either way have fun.

pyrosis,

It depends on your needs. It’s entirely possible to just format a bunch of disks as xfs and set up some mount points you hand to a union filesystem like mergerfs or whatever. Then you would just hand that to proxmox directly as a storage location. Management can absolutely vary depending on how you do this.
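A rough sketch of that kind of layout, with made-up device names and mount points:

mkfs.xfs /dev/sdb1
mkfs.xfs /dev/sdc1

# /etc/fstab
/dev/sdb1  /mnt/disk1  xfs  defaults  0 2
/dev/sdc1  /mnt/disk2  xfs  defaults  0 2
/mnt/disk1:/mnt/disk2  /mnt/storage  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0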

At its heart it’s just Debian, so it has all the abilities of Debian. The web UI is more tuned to vm/lxc management operations. I don’t really like the default lvm/ext4, but they do that to give access to snapshots.

I personally just imported an existing zfs pool into proxmox and configured it to my liking. I discovered options like directly passing datasets into lxc containers with lxc options like lxc.mount.entry.
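For example, a bind mount line like this in the container config passes a dataset through (container id and paths made up):

# /etc/pve/lxc/101.conf
lxc.mount.entry: /mediapool/media mnt/media none bind,create=dir 0 0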

I recently finished optimizing my proxmox for disk io performance. It’s modified with things like log2ram, tmpfs in fstab for /tmp and /var/tmp, tcp congestion control set to cubic, a virtual opnsense heavily modified for 10gb performance, and a bunch of zfs media datasets migrated to one media dataset optimized for performance. There are just so many tweaks and knobs to turn in proxmox that can increase performance. Folks even mention docker; I’ve got it contained in an lxc. My active ram usage for all my services is down to 7 gigs and disk io hovers between .9 and 8%. That’s crazy but it just works.

pyrosis,

Yup, you can. In fact you likely should, and you will probably find yourself improving disk io dramatically compared to your original plan. It’s better in my opinion to let the hypervisor manage disk operations, which means it should also share files with smb and NFS, especially if you are already considering nas-type operations.

Since proxmox supports zfs out of the box, along with btrfs and even xfs, you have a myriad of options. Combine that with cockpit and you have a nice management interface.

I went the zfs route because I’m familiar with it and I appreciate its native sharing options built into the filesystem. It’s cool to have the option to create a new dataset off the pool and directly pass it into a new lxc container.

pyrosis,

Bookmark this if you utilize zfs at all. It will serve you well.

jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/

You will be amazed by zfs performance in proxmox given all the tuning that is possible. If this is going to be an existing zfs pool, keep in mind it’s easier to just install proxmox with the zfs option and let it create a zfs rpool during setup. For the rpool, tweak a couple options: make sure ashift is at least 12 during the install, or 13 if you are using some crazy fast SSD as the proxmox disk for the rpool.

It needs to be 12 if it’s a modern-day spinner, and that’s probably a good setting for most ssds. Do not go over 12 if it’s a spinning disk.
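You can sanity-check a disk’s physical sector size before deciding (device name made up); 2^12 = 4096 lines up with a 4K-sector disk:

lsblk -o NAME,PHY-SEC /dev/sda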

Now beyond that, assuming you have an existing zfs pool, you can directly import it into proxmox with a single import command.
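Something like this, with a made-up pool name:

zpool import                # list pools the system can see
zpool import mediapool      # import by name
zpool get ashift mediapool  # verify the pool's ashift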

In this scenario zfs would be fully maintaining disk operations for both an rpool and a media pool.

You should consider tweaking a couple things to really improve performance via the guide I linked.

Proxmox vms/zvols live in their own dataset. Before you start getting too crazy creating vms, make sure you are taking advantage of all the performance tweaks you can. By default proxmox sets the record size for all datasets to 128k. qcow2, raw, and even zvols will benefit from a record size of 64k because it tends to improve the performance of underlying filesystems like ext4, xfs, even ufs. Imo it’s silly to create vm filesystems like btrfs if your vm is sitting on top of a cow filesystem.

Another huge improvement is tweaking the compression algorithm. lz4 is blazing fast and should be your default go-to for zfs. The newer zstd is pretty good but can slow things down a bit for active operations like live vm disks. So make sure your default compression is lz4 for datasets with vm disks. Honestly it’s just a good default to specify for the entire pool. You can select other compression for datasets with more static data.

If you have a media dataset full of files like music, vids, and pics, setting a record size of 1mb will heavily improve disk io operations.
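For example, with made-up dataset names:

zfs set recordsize=64K rpool/data      # dataset holding vm disks
zfs set compression=lz4 rpool          # pool-wide default
zfs set recordsize=1M mediapool/media  # large static media files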

In proxmox, zfs will default to grabbing half of your memory for arc. Make sure you change that after install. It’s a file that defines arc_max in bytes. Set the max to something more reasonable if you have 64 gigs of memory. You can also define the arc_min.
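That file is /etc/modprobe.d/zfs.conf, and it might look like this (the 8 gig max and 2 gig min are just example values):

options zfs zfs_arc_max=8589934592
options zfs zfs_arc_min=2147483648

Run update-initramfs -u and reboot for it to take effect.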

Some other huge improvements? If you are using an SSD for your proxmox install, I highly recommend you install log2ram on your hypervisor. It will stop all those constant log writes on your SSD, syncing them to disk on a timer and at shutdown/reboot. It’s also a huge performance and SSD-lifespan improvement to migrate /tmp and /var/tmp to tmpfs.
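The fstab entries for that might look like this (sizes are just examples):

tmpfs  /tmp      tmpfs  defaults,noatime,size=2G  0 0
tmpfs  /var/tmp  tmpfs  defaults,noatime,size=1G  0 0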

So many knobs to turn. I hope you have fun playing with this.

pyrosis,

Another thing to keep in mind with zfs: underlying vm disks will perform better if the zfs pool is a mirror or a stripe of mirrors. Z1/Z2 type pools are better for media and files. Vm disk io will improve dramatically on the mirror-type layout. Just passing on what I’ve learned over time optimizing systems.
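For example, a stripe of mirrors (device names made up):

zpool create vmpool mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd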

pyrosis,

At its core, cockpit is like a modern-day webmin that allows full system management. So yes, it can help with creating raid devices and even lvm volumes. It can help with mount points and encryption as well.

I do know it can help share whatever with smb and NFS. Just have a look at the plugins.

As for proxmox it’s just using Debian underneath. That Debian already happens to be optimized for virtualization and has native zfs support baked in.

cockpit-project.org/applications

pyrosis,

Pretty much this. It gets its own folder and, in jellyfin, its own library. You just give mom access to this and whatever else you want, and unselect that library for everyone else. The setting is under users. It’s straightforward and is a checkbox-based select. You probably have it set to all libraries right now. Uncheck that and you can pick and choose per user.

pyrosis,

How about defense against dhcp option 121 changing the routing table and decloaking all VPN traffic, even with your kill switch on? They got a plan for that yet? Just found this today.

www.leviathansecurity.com/blog/tunnelvision

pyrosis,

Of course, but you don’t control rogue dhcp servers some asshat might plug in anywhere else that isn’t your network.

pyrosis,

I doubt it would matter in some environments at all.

As an example, a pc managed by a domain controller that can modify firewall rules and dhcp/dns options via group policy. At that point the firewall rules can be managed centrally.

pyrosis,

Setups for hardware decoding are based on the underlying OS. A quite common example is docker on Debian or Ubuntu. You will need to pass the appropriate /dev/ directories, and at times files, into your jellyfin docker container with docker’s device option. Commonly that would be /dev/dri.

It gets more complicated with a vm because you are likely going to be passing the hardware directly into the vm which will prevent other devices outside the vm from using it.

You can get around this by placing docker directly on the os, or placing docker in a Linux container with appropriate permissions and the same devices passed into the Linux container. In this manner, system devices and other services will still have access to the video card.

All this to say: how you pass the hardware into jellyfin depends on your setup and where you have docker installed. However, jellyfin on docker will need you to pass the video card into the container with the device option, and docker will need to see the device to be able to do that.
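A minimal compose sketch of that, assuming intel/amd hardware exposed at /dev/dri (paths made up):

services:
  jellyfin:
    image: jellyfin/jellyfin
    devices:
      - /dev/dri:/dev/dri   # gpu render nodes for hardware decoding
    volumes:
      - /srv/jellyfin/config:/config
      - /srv/media:/media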

pyrosis,

Nothing but love for that project. I’ve been using docker-ce and docker-compose. I had portainer-ce but just got tired of it. It’s easier for me to just make a compose file and get things working exactly how I want.

pyrosis,

Are you using tvheadend and their jellyfin plugin? Asking out of curiosity.

github.com/tvheadend/tvheadend

Anyway Plex and emby come to mind.

pyrosis,

Oh, then definitely tvheadend. You can run the server lots of ways, even docker. It also has plugin support.

pyrosis,

Music playlists are different from Plex. You can create them, import them, or generate an instant mix.

4k is seamless and performs better imo. You can use transcoding or not if you have files the way you want them. If you do, you can select on a per-user basis who gets to transcode.

You can set bandwidth limits.

I’ve seen a feature that allows multiple users to stream the same movie so you can watch at the same time. I use npm, and often a couple peeps might watch a movie at the same time without using this feature, and it works fine.

I use the client app on Android and a firestick atm. I think I just downloaded it, but you can sideload too if you want. The media server app is available for various os, so technically you could set it up on whatever you want. Just check your app store.

jellyfin.org/downloads/clients/

It can plug into HDHomeRun tuners or m3u playlists for live tv if that is your thing. It has a plugin for nextpvr and tvheadend if you utilize those for over-the-air, or if you already have an m3u setup in those pvr services. Those are great btw and available in docker containers.

It always defaulted to the way I have my files encoded. It absolutely can transcode to support other clients, and you decide preferences. I did notice that since most of my files are h.264 with a few h.265, sometimes it helped to turn off transcoding because the client supported them natively; jellyfin was transcoding h.265 mkv to something like an mp4. Anyway, a quirk.

Login is pretty simple. Users can change their passwords. It has codes it can generate to approve a new device if you are already logged into an app on your phone, like 6 temporary numbers. You can also set up pins, or whatever they call them, under users.

pyrosis,

I’ll be honest, op: if it’s on a TV I use the newer fire sticks with the jellyfin app. They already have support for various codecs and stream from my server just fine. Cheap too, and they come with a remote.

If I were just trying to get a homemade client up, I would consider Debian bookworm and just utilize the deb from the GitHub link here…

jellyfin.org/downloads/clients/

Personally I’d throw on cockpit to make remote administration a bit easier and set up an autostart at login for the jellyfin media player with the startup apps. You can even add a launch flag to launch it full screen, like…


jellyfin --fullscreen
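For the autostart part, a minimal XDG autostart entry would work, assuming the player launches with the command above:

# ~/.config/autostart/jellyfin.desktop
[Desktop Entry]
Type=Application
Name=Jellyfin Media Player
Exec=jellyfin --fullscreen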

The media player doesn’t really need special privileges so you could create a basic user account just for jellyfin.

pyrosis,

Usually a reverse proxy runs behind the firewall/router. The idea is that you point 80/443 at the proxy with port forwarding once traffic hits your router.

So if someone goes to service.domain.com

You would have dynamic DNS telling domain.com your router’s public IP.

You would tell domain.com that service.domain.com exists as a CNAME or A record. You could also say *.domain.com is a CNAME; that would point any hostname to your router.

From here, in the proxy, you would say service.domain.com points to your service’s IP and port. Usually that would be on the lan, but in your case it would be through a tunnel.

It is possible and probably more resource efficient to just put the proxy on the VPS and point your public domain traffic directly at the VPS IP.

So you could say on the domain that service.domain.com points to the VPS IP as an A record, and service2.domain.com points to the VPS IP as another A record.

You would allow 80/443 on the VPS and create entries for the services.

Those would look like service.domain.com pointing to localhost:port.
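A rough nginx sketch of one of those entries (hostname, port, and cert paths made up):

server {
    listen 443 ssl;
    server_name service.domain.com;

    ssl_certificate /path/to/ssl_certificate.crt;
    ssl_certificate_key /path/to/ssl_certificate.key;

    location / {
        proxy_pass http://127.0.0.1:8080;  # the service listening locally on the VPS
    }
}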

In your particular case I would just run the proxy on the public VPS the services are already on.

Don’t forget you can enable https certificates once you have them running. You can secure the management interface on its own service3.domain.com with the proxy if you need to.

And op, consider some blocklists for your vps firewall, like spamhaus. It wouldn’t hurt to set up fail2ban either.

pyrosis,

It’s definitely encrypted; they can just tell by signature that it is wireguard or whatever and block it.

They could do this with ssh if they felt like it.

pyrosis,

You can do that or you can use a reverse proxy to expose your services without opening ports for every service. With a reverse proxy you would point port 80 and 443 to the reverse proxy once traffic hits your router/firewall. In the reverse proxy you would configure hostnames that point to the local service IP/ports. Reverse proxy servers like nginx proxy manager then allow you to setup https certificates for every service you expose. They also allow you to disable access to them through a single interface.

I do this and have set up some blocklists on the opnsense firewall. Specifically, you could set up the spamhaus blocklists to drop any traffic that originates from those ips. You can also use the Emerging Threats blocklist; it has spamhaus and a few more integrated from dshield etc. These can be made into simple firewall rules.

If you want to block entire countries’ ips you can set up the GeoIP blocklist in opnsense. This requires a maxmind account but allows you to pick and choose countries.

You can also set up the suricata ips in opnsense to block detected traffic using daily-updated lists. It’s a bit more resource intensive than regular firewall rules but also far more advanced at detecting threats.

I use both firewall lists and ips scanning on both the wan and lan in promiscuous mode. This heavily defends your network in ways most modern networks don’t even take advantage of.

If you want even more security, you can set up unbound with DNS over TLS. You could even set up openvpn and route all your internal traffic through that to a VPN provider. Personally I prefer having individual systems connect to a VPN service.

Anyway, all this to say: no, you don’t need a VPN static IP. You may prefer instead a domain name you can point at your systems. If you’re worried about privacy here, identify providers that allow crypto and don’t care about identity. This is true for VPN providers as well.
