selfhosted

atomWood, in Namecheap ups its prices 9% for .com and .xyz this fall.

Cloudflare is also upping prices. Since Cloudflare sells domains at cost, I expect the wholesale registry prices have simply increased.

Fidelity9373,

Aside from the obvious fighting and bidding over an already claimed single domain name, what factors into the inherent pricing of a domain?

Unaware7013,

Wholesalers squeezing buyers because they can.

notepass,

I just tried to check the pricing of domains at cloudflare and they just don’t have a list. You need to transfer a domain to see the price. So I will probably stay with inwx for the time being.

atomWood, (edited )

It’s accessible here, but I believe you have to log in to view it: dash.cloudflare.com/…/pricing

Edit: Here is a screenshot of the page imgur.com/a/wL0bEde

Shdwdrgn, in Ok, how do I start self-hosting?

It sounds like maybe you’re looking for a primer on how networking works across the internet? If so, here’s a few concepts to get you started (yeah unfortunately this huge post is JUST an overview), and note that every one of these services can also be self-hosted if you really want to learn the nuts & bolts…

DNS is the backbone of everything, it is the service that converts names like “lemmy.world” into an actual IP address. Think of it like the phone book of the internet, so like if you wanted to call your favorite pizza place you would find their name, and that would give you their phone number. Normally any domain that you try to reach has a fixed (or static) IP address which never (or rarely) changes, and when you register your domain you will point it to a DNS server that you have given authoritative access to provide the IP where your server can be found.
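
If you want to watch that lookup happen yourself, the dig and nslookup tools included with most systems will ask DNS for a name and print back the IP it resolves to (just a quick illustration, nothing you need to set up):

    # Ask DNS "what IP address does this name point to?"
    dig +short lemmy.world

    # Same idea with nslookup, which also shows which DNS server answered
    nslookup lemmy.world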

But what if you’re running a small setup at home and you don’t actually have a static IP for your server? Then you look to DDNS (Dynamic DNS) and point your domain’s DNS to them. There are several free ones available online you can use. The idea is you run a script on your server that they provide, and every time your IP from your ISP changes, the script notifies the DDNS service, they update their local DNS records so the next person looking for your domain receives the updated IP. There can be a little delay (up to a few minutes but usually only seconds) in finding the new address when your IP changes, but otherwise it will work very smoothly.
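
To make that concrete, here is a minimal sketch of what such an update script usually boils down to (the DDNS provider URL and token below are made-up placeholders; real providers supply their own script or URL format):

    #!/bin/sh
    # Hypothetical DDNS updater: look up our current public IP, then report it
    # to the DDNS provider. Typically run from cron every few minutes.
    DDNS_HOSTNAME="home.example.com"
    TOKEN="your-api-token"
    CURRENT_IP=$(curl -s https://ifconfig.me)
    curl -s "https://ddns.example.com/update?hostname=${DDNS_HOSTNAME}&token=${TOKEN}&ip=${CURRENT_IP}"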

You mentioned DHCP, so here’s a quick summary of that. Basically you are going to have a small network at your home. Think of your internet router as the front-end, and everything behind it, like your computers or mobile devices, is going to be on its own little private network. You will typically find they all get an IP address starting with 192.168.*, which is one of the reserved spaces which cannot be reached from the internet except by the rules your router allows. This is where DHCP comes in… when you connect a device it sends out a broadcast asking for a local network IP address that it is allowed to use. The DHCP server keeps track of the addresses already in use, and tells your device one that is free. It will also provide a few other local details, like what DNS server to use (so if you run your own you can tell the DHCP service to use your local server instead of talking to your ISP). So like the phone book analogy, your DHCP service tells all of your local devices what phone number they are allowed to use.

Now to put all of this together: you probably have a router from your ISP. That router has been pre-programmed with the DHCP service and what DNS servers to use (which your ISP runs). The router also acts like the phone company switchboard… if it sees traffic between your local devices, like a computer trying to reach your web server, it routes those calls accordingly. If you are trying to get to Google then the router sends your call to the ISP, whose routers then send your connection to other routers, until it finally reaches Google’s servers. Basically each router becomes a stepping stone between your IP address and someone else’s IP address, bringing traffic in both directions.
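
Going back to the DHCP piece for a second: if you ever decide to run your own DHCP server instead of the router’s built-in one (dnsmasq is a common choice), the whole thing is just a few lines of config. This is only a sketch with example addresses, and it’s absolutely not required for a normal home setup:

    # /etc/dnsmasq.conf (sketch)
    dhcp-range=192.168.0.100,192.168.0.199,12h    # pool of addresses to hand out, 12 hour leases
    dhcp-option=option:dns-server,192.168.0.2     # tell every device which DNS server to use
    dhcp-host=aa:bb:cc:dd:ee:ff,192.168.0.200     # always give this MAC address the same IP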

OK so now you want to run a web server for your domain. This means that besides getting the DNS routing in place, you also need to tell your router that incoming web traffic needs to be directed to your web server. Now you need to learn port numbers. Plain web traffic uses port 80, and SSL (HTTPS) traffic uses port 443. Every type of service has its own port number, so DNS is port 53, FTP is port 21, and so on. Your router will have a feature called port-forwarding. This is used when you always want to send a specific port to a specific device, so you tell it that any incoming traffic on port 80 needs to be sent to the IP address of your web server (don’t worry, this won’t interfere with your own attempts to reach outside websites, it only affects connections that are trying to reach you).
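
Under the hood, that port-forwarding page on your router is usually just creating a NAT rule. On a Linux box acting as the router it would look roughly like this (a sketch; it assumes eth0 is the internet-facing interface, and 192.168.0.200 is the web server’s internal address as in the next paragraph):

    # Send incoming web traffic (port 80) on the WAN interface to the internal web server
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.0.200:80
    # Allow the forwarded traffic through the firewall
    iptables -A FORWARD -p tcp -d 192.168.0.200 --dport 80 -j ACCEPT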

Now if you’ve followed along you might have realized that even on your local network, DHCP means that your server’s own IP address can change, so how can you port-forward port 80 traffic to your web server all the time? Well you need to set a local static IP on your server. How that is done will be specific to each linux distribution but you can easily find that info online. However you need to know what addresses are safe to use. Log in to your router, and find the DHCP settings. In there you will also see a usable range (such as 192.168.0.100 to 192.168.0.199). You are limited to only changing the last number in that set, and the router itself probably uses something like 192.168.0.1. Each part of an address is a number between 0-255 (but 0 and 255 are reserved, so except in special cases you only want to use the numbers 1-254), so with my example of the address range being used by DHCP, this means that you would be free to use any address ending in 200-254. You could set the static IP of your web server to 192.168.0.200, and then point the port-forwarding to that address.
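
As one concrete example, on distributions that use netplan (Ubuntu and its relatives) a static address looks something like this (a sketch; the interface name and addresses will be different on your machine, and other distros use different files entirely):

    # /etc/netplan/01-static.yaml (sketch) -- apply with: sudo netplan apply
    network:
      version: 2
      ethernets:
        eth0:
          addresses: [192.168.0.200/24]
          routes:
            - to: default
              via: 192.168.0.1
          nameservers:
            addresses: [192.168.0.1]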

Now remember, your local IP address (the 192.168 numbers) is not the same as your external internet address. If you pay your provider for a static internet address, then your router would be programmed with that number, but your web server would still have its local address. Otherwise if you’re using DDNS then you would tell that service the outside IP address that your router was given by your ISP (who coincidentally is running a DHCP server that gave your router that address).

Let me see if I can diagram this… OK so imagine your router has the internet address of 1.2.3.4, your web server has the local address of 192.168.0.200, and someone from the internet who has address 100.1.1.1 is trying to reach you. The path would be something like this:

100.1.1.1 -> (more routers along the way) -> your ISP -> 1.2.3.4 (router) -> 192.168.0.200 (server)

They send a request to get a web page from your server, and your server sends the page back along the same path.

Yes there’s a lot to it, but if you break it down one step at a time you can think of each step as an individual router that looks to see if the traffic is going to something on the outside or to something on the inside. Which direction do I need to send this along? And eventually the traffic gets to a local network that says “hey I recognize this address and it needs to go over to this device here.” And the key that ties all of this together is DNS, which turns the name into the destination IP address; from there each router in the path knows where to forward the traffic next. I can break things down further for you if something isn’t clear but hopefully that gives you a broad overview of how things move around on the internet.

goddard_guryon,

Holy shit dude, that was actually very helpful! I’ll need a few more go-throughs to fully grasp every piece here, but thanks a ton for writing it so precisely.

Based on this though, is there no way to have port-forwarding except setting it up explicitly in my router? I ask this because 1) in my personal setup, I’ll be switching between wifi and mobile data quite often, and 2) I may end up on an institutional wifi after some time, in which case I won’t have access to the router

LinuxSBC,

Correct. What you’d need in that case is a reverse proxy like ngrok, which is a bit more difficult to set up.

Shdwdrgn,

Sorry for the late reply, we had a yard sale today and then company this evening so I’m just getting back to the computer. God this feels like the longest I’ve been offline in like a decade! Anyway I’m glad it was helpful, I literally rolled out of bed and this was the first post I read this morning so I just started typing, I was a bit worried I might have been incoherent writing so much before I even had breakfast but hey if it came across then I’m glad I helped!

Happy to see others have already stepped in to answer your questions, as with most things there are multiple solutions to your challenges but the first goal is to understand what’s going on so you know the right questions to ask. Get your basic setup in place, poke around to see how it all interconnects, and you’ll start understanding how other things come in to play. The bit I mentioned about being able to direct your port 80 incoming traffic without affecting your web browsing from another computer (which of course ALSO talks on port 80) really tripped me up until I realized I was overthinking things, so I just trusted that it would keep working and eventually I did find the answers.

So as I mentioned, yep there really is a lot of information to digest when you’re learning how networking works, but once you get some of these basic concepts down then the rest will start coming easier. A lot of what I have learned comes from taking a single piece, playing with it a bit to get the hang of essentially what it is doing without getting so deep that I’m overwhelmed, and then moving on to the next piece to see how they interconnect. If you do that long enough you’ll start coming back to the first pieces and going deeper into them. Or you’ll find some pieces that you can get by only knowing the basics and you’ll never need to dig any deeper. The big thing is having an overview of how things connect to each other because yeah, you’re going to want to try different things with your servers. Just wait until you build your first firewall with multiple internal networks and even multiple ISP connections (my home network has five local zones plus two ISPs – just because I can!)…

goddard_guryon,

Haha I think it’s best if I stop running towards just getting my own server up and actually learn this stuff instead, regardless of how long it takes. I’ll try to follow through on this, thanks again for all the help :D

Shdwdrgn,

Hey no worries, you’ll get there. Just kick back and enjoy the ride, because it’s a lot of fun learning all this stuff!

KrokanteBamischijf,

You might want to consider setting up a VPN tunnel to your own network. Main benefit is that you can access your home network as if you were connected to it locally. Which makes switching between mobile data and WiFi a non-issue.

This requires some sort of VPN server and usually a single port-forwarding rule for the protocol which your VPN software of choice uses. For the simplest default configuration of OpenVPN this means setting UDP port 1194 to point to your OpenVPN server.

Generally, keeping things simple, there are two types of VPN you can set up:

  • split tunnel VPN, which gives you access to your home network but accesses the internet directly.
  • full tunnel VPN, which sends all of your traffic through your home router.

It is a little more complicated than that, and there’s more nuance to it, such as whether to use your own DNS server or not, but all that is best left to some further reading.

I’ve set up an OpenVPN server myself, which is open source and completely free to mess around with. (Save for maybe some costs for registering your own domain or DDNS services. Those are all optional though, and mainly provide convenience and continuity benefits. You can definitely just set up a VPN server and connect with your external IP address.)
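
For a rough idea of what the server side looks like, here’s a trimmed OpenVPN server.conf sketch covering just the bits mentioned above (certificates, keys and the rest of a working config are omitted; the addresses are examples):

    # /etc/openvpn/server.conf (sketch, heavily trimmed)
    port 1194
    proto udp
    dev tun
    server 10.8.0.0 255.255.255.0           # address pool handed to VPN clients
    push "route 192.168.0.0 255.255.255.0"  # split tunnel: only home-LAN traffic uses the VPN
    # push "redirect-gateway def1"          # uncomment for a full tunnel: all client traffic goes via home
    # push "dhcp-option DNS 192.168.0.2"    # optional: point clients at your own DNS server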

spez_,

Or use Tailscale, it’s quite easy and it’s how I access all of my services

Mars2k21,

Honestly one of the most well written posts I've read. Thanks a lot, helped me understand all of this networking stuff involved with self-hosting since I literally just bought a PC to function as my home server like 2 days ago.

Shdwdrgn,

Thanks! Maybe if I hadn’t just climbed out of bed five minutes before writing that I might have been able to organize the info a little better, but apparently everyone is happy with it. 😄

Don’t get caught up on needing fancy new hardware to run servers from, the last new computer I bought was a 386. This spring I just rolled my VM servers off of some 2006 rack servers (dual-core and 8GB of memory, getting a bit painful!), and I’m serving up live internet content. You can go a long way with old hardware. I always say play with what you’ve got, or with the stuff other people are throwing away. By doing this you can push a machine to its limits to see what it can really handle, which gives you a good idea of what hardware you want to upgrade to for YOUR specific needs. My new servers are from 2012-2014, and a massive upgrade at about $150 each!

hmcn, in The Cloud Is a Prison. Can the Local-First Software Movement Set Us Free?

This article, as much as I agree with it, conflates cloud hosting and remote-only software design. Cloud hosting really is a prison, but mostly for developers that are lured by its convenience and then become dependent on its abstractions. What we experience today in most mainstream software isn’t necessarily coupled to cloud hosting, but is instead a conscious product design choice and business strategy to deny users power and control of their data. In short, cloud providers like AWS, Azure, and GCP are doing to software companies what those companies are doing to us. There is a way to use shared data centers without this kind of software design philosophy. As mobile continues to dominate, the solution we need likely involves remote servers but with a model that treats them with skepticism and caution, allowing data portability and redundancy across a variety of vendors. I should be able to attach a few hosting services to a software experience I use and transfer my data between them easily. The idea that local-first software is “freed from worrying about backends, servers, and [hosting costs]” is misleading, since my local device has to become the client and/or server if there is any connectivity happening over the internet. Wresting control of our data from the dominant software companies will require creating experiences that are not only different, but better, and doing that with a mobile phone passing between cell towers functioning as the server is a tall order. We have grown to expect more than intermittent connectivity with conflict resolution. Nonetheless, we absolutely should not accept the current remote-only software paradigm, but instead need to devise better ways to abstract how remote hosts are inhabited and create a simple multi-host option that is intuitive for consumers.

otl,

Great points. It’s the proprietary nature and lack of interoperability of “the cloud” that causes problems. My email is hosted on a remote server but I have control over my data. There’s no algorithm controlling what order I see my mail in or who I can forward stuff to. There are many different tools and clients available to me and to everyone else to work with their data.

Imagine if publishing a photo from my phone to Instagram meant copying a file from one folder to another. Or if I want to create an automatically translated voiceover from the captions of all my old Facebook photos in a video editor. Right now these operations require complex software. But the technology is all there and has been for a long time.

I often think about upspin.io

hmcn,

Exactly - interoperability is key, and is intentionally removed from many software platforms once they become big enough. Cory Doctorow writes about this here.

Companies have a funny relationship with interop. When companies are small and trying to build up their customer-base, they love interop, love the idea of selling ink for someone else’s printer or a way to read your waiting messages on someone else’s social media giant. Facebook once had a whole suite of interoperability tools to make it easy to plug Facebook into other services, but it has whittled these away over the years and today it routinely threatens and even sues rivals that try to interoperate with it.

A trend that I actually like is more software supporting using a user’s own iCloud or Google Drive as a data store rather than using the company’s own servers. The step that needs to take place is a way to use many storage providers simultaneously (including home server) with syncing behavior abstracted away. The software would essentially be a database cluster with a variety of heterogeneous nodes supported. A library that abstracts this multi-host pattern for use in both Android and iOS apps would go a long way. There is still the problem of the controller orchestrating uploads and syncs, though, which for most users would be their phone.

Upspin is new to me but looks like it’s right up this alley. Making the whole thing work for non-technical users will be one of the hard parts I imagine.

Edit: I also just saw, there is now Veilid.

UdeRecife,

Hey, you make a great point. There’s a false dichotomy being presented here. As you see it, local-first is a bit of a misnomer when you’re already expecting your device to join a remote environment.

Yes, makes sense that we’re being lured by the so-called cloud hosting. Following a business model that sells convenience in lieu of data control, cloud providers are distorting our current understanding of remote hosting. They’re breaking the free flow of information by siloing user data.

Now, with that being said, I’d like to add something about your presentation. I’d suggest you avoid walls of text. Use paragraph breaks. They’re like resting areas for the eyes. They allow the brain to catch up and gather momentum for the next stretch of text.

Regardless. You brought light to this conversation. For that, thank you.

hmcn,

I’m glad you found my take engaging!

Paragraph breaks now enabled.

downpunxx, in Cancelled Dropbox

I'd switch to self hosting, but the owner's a real lazy son of a bitch

shinnoodles, in (Re-)Introducing GameVault: The Self-Hosted Gaming Platform

The rebrand is great! I’m loving the icon, and am looking forward to seeing how this project progresses. I just have a few questions.

  1. Even beyond a Linux client, how about a Linux server package? I understand the client situation. Microsoft dominates in the desktop space, but it’s the complete opposite in the server space. Windows server is a super niche option. This severely limits the amount of people who can host this service imo.
  2. I get there’s a piracy disclaimer, but I do think it would just be better to change the “alternatively sourced” phrase altogether. I feel that phrasing makes GameVault a lot easier to attack for those who may not be fond of such a service existing. Maybe just say DRM-free? It seems like the easiest way to dodge that sort of thing. Perhaps there’s a better way to phrase it that I’m not thinking of.

Overall, an awesome project! I know a lot of friends who can’t afford to buy a lot of games, and I’ve always wanted to share my library with them. It also made me think a lot more about how centralized PC gaming is nowadays. Nearly every seller distributes through Steam or Epic, and has some form of DRM. If Steam/Epic wanted to, your entire library or any game they chose could be deleted from the marketplace. Even if you have it downloaded locally, sadly a ton of games rely on the connection to Steam servers to function. Even if the games themselves are completely offline, or single player. GOG, Itch, and any other similar platforms are a rarity nowadays, and a lot of the bigger publishers and developers don’t use them.

Apologies for the text wall, it was not originally supposed to be this long. I hope you got something out of my rambling. I look forward to when I can run this when Linux is (hopefully) better supported and the project matures to a point where I can transfer over. Maybe I’ll repurchase some games on Gog in that time. I do wonder how this’ll affect my experience with the Steam Deck…

alfagun74,

Hang on a sec, I nearly spewed my half-finished iced tea after reading that! Who on earth mentioned Windows servers :D? Our backend is completely containerized and operates on Alpine Linux. You could even run the server on a toaster if you’re up for it!

Appreciate the kind feedback. So, regarding the slogan, we’ve actually grown quite fond of it. It doesn’t suggest piracy in any way; it simply refers to games from alternative sources, like your DVD collection or a developer’s website.

shinnoodles,

Fuck, really? Now I feel like an idiot. Thanks for making me aware. Still praying for a Linux client, NixOS doesn’t play too well with things outside the Nix ecosystem. Maybe I can contribute to the Linux efforts when I gain the proper skills.

Fair enough, hopefully it doesn’t lead to any fuss with the big corpos if this project grows to a decent size. Gaming is pretty untouched in the self-hosted world, who knows what’ll happen. It’s pretty exciting tbh.

Off topic, is there any chance you guys can create a Matrix space and bridge it with the Discord? I’d love to chat, but I really am trying to move away from Discord.

alfagun74,

We’ll check Matrix out, and if we do bridge it, you’ll definitely hear about it in the blog :)

shinnoodles,

Awesome! Just to explain a bit, Matrix is a FOSS and decentralized chat protocol that’s more feature rich than XMPP, but also younger and less mature. It’s more popular than XMPP at the moment, and has a lot of nice and modern clients to pick from. I hope it can serve your team well if you guys decide to pick it up.

Die4Ever,

for 2. maybe just say bring your own games? BYOG

shinnoodles,

That’s actually really good! I like that one a lot more.

numbers1492, in Should I be aware of something when buying a TV?

I’ve had no issues with my LG OLED. Picture quality is great and the UI doesn’t suck.

With the newer LG TVs there is a Jellyfin app. Ignore the people that say don’t connect it to the internet; you probably don’t care and would be annoyed you can’t use the features anyway. For things that don’t have an app through the TV you can also use the browser that’s built in.

Be careful buying Android TV boxes as they can be super sketchy, way more so than name-brand TVs.

Roku boxes also seem to have an app for jellyfin that has been pretty reliable.

Edit: one annoying thing that seems common among TVs is that the ethernet is limited to 100Mbps and you’ll get faster speeds through wifi.

fireshell, in Self-hosted Password Manager Recommendation?

You can install Vaultwarden instead of Bitwarden. The differences between Vaultwarden and Bitwarden are covered in the linked comparison.
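
For reference, a minimal Vaultwarden deployment is a single container. A docker-compose.yml sketch (the host port is arbitrary, and you’ll want a reverse proxy with HTTPS in front of it before exposing it anywhere):

    services:
      vaultwarden:
        image: vaultwarden/server:latest
        restart: unless-stopped
        volumes:
          - ./vw-data:/data        # all vault data lives here; back this folder up
        ports:
          - "8080:80"              # web vault and API on http://<host>:8080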

samwise,

Also recommend Vaultwarden, it's what I've been using for the past year since LastPass dropped the ball.

It has entries for TOTP, notes, and you can add more fields as needed. I've been really happy with it.

keyez,

Not sure why someone would. Bitwarden provides their own self-host repos and docs and is working on a unified container instead of docker-compose scripts for their production stack.

I’ve been using their stack for the last 6 years and the only issues I’ve run into were my fault. Also tested their container and will be switching to that soon.

Im1Random,

Don’t you have to pay to use premium features on your own server with their official software? With Vaultwarden you get all premium features unlocked for free on an unlimited number of devices.

keyez,

Yes that is true but $10/year for premium is not bad, I donate that much to separate projects per year

InverseParallax,

Strongly second Vaultwarden, covers so many cases for me.

boothin,

Is it just me or does that "comparison" make no sense for this thread? It's mostly comparing Vaultwarden to the cloud version of Bitwarden, not the self-hosted version. It only mentions the self-hosted version in passing. It doesn't do anything to help someone choose between Vaultwarden and self-hosted Bitwarden.

innercitadel,

The article honestly reads like it was written by an AI tool.

jeena, (edited ) in Self-hosted Password Manager Recommendation?

keepassxc.org as Password manager and 2FA and syncthing.net to sync the database between your devices without a central server.

  • You can have several databases (one for wife, one for you)
  • You can store your 2FA there
  • You can make nested groups of your passwords
  • You can store certificates and other attachments as files or custom fields like backup codes, etc.

Don’t use KeePass or KeePassX; KeePassXC is the community version and the most polished, with the most functionality.

There are many 3rd party clients which can read/write the keepassx database file like:

Instead of Syncthing you can also use some other file sync if you have it set up already, like iCloud, Nextcloud, or Dropbox, but I find Syncthing is the easiest to set up and forget.

mathemachristian,

I use keepassxc and save the DB in WebDAV. Can’t imagine it getting easier. Can access it from any device.

bazmatazable,

Doing pretty much the same thing but using the android app from AuthPass with backup to my Nextcloud. (It uses kdbx to store the passwords)

mathemachristian,

I sync webdav via Davx5 to my android. It integrates seamlessly

GnuLinuxDude,

I do this exact same setup but one thing to add to your answer and be aware of is that syncthing is not a backup solution. If you delete the files on one computer, those files will be deleted on the other synced devices. And accidents can happen.

So, as always, take backups.

Double_A,

Yeah, it always stresses me out when I see people saying that Syncthing is a backup solution… (not that OP did here)

Lemmin,

Well, it's a great alternative for people who can't afford a local server. I just set it to one-way sync from phone to PC for backup, while for KeePass I just enable file versioning and use 2 databases to be safe from accidentally overwriting it.

Salzkrebs,

You can configure Syncthing to keep deleted/changed files for some time. So you could connect a Raspberry Pi to store everything read-only.
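
In the folder settings this is the “File Versioning” option; in config.xml the result looks roughly like this (a trimmed sketch, the real folder element carries more attributes; “trashcan” keeps deleted or overwritten files for the given number of days):

    <folder id="keepass" label="KeePass" path="~/keepass">
        <versioning type="trashcan">
            <param key="cleanoutDays" val="30"/>
        </versioning>
    </folder>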

theterrasque, in Can I run local LLMs on Intel ARC/AMD with 8GB of RAM?

You can probably run a 7b LLM comfortably in system RAM, maybe one of the smaller 13b ones.

Software to use

Models

In general, you want small GGML models. huggingface.co/TheBloke has a lot of them. There are some superHOT versions of models, but I’d avoid them for now. They’re trained to handle bigger context sizes, but it seems that made them dumber too. There are a lot of new things coming out on bigger context lengths, so you should probably revisit that when you need it.

Each model has different strengths: orca is supposed to be better at reasoning, airoboros is good at longer and more storylike answers, vicuna is a very good all-rounder, and wizardlm is also a notably good all-rounder.

For training, there are some tricks like qlora, but results aren’t impressive from what I’ve read. Also, training LLM’s can be pretty difficult to get the results you want. You should probably start with just running them and get comfortable with that, maybe try few-shot prompts (prompts with a few examples of writing styles), and then go from there.

MigratingtoLemmy,

Thank you. I did have llama.cpp in mind but didn’t know where or how to start!

Do these models have a limit on how much information they can ingest and how much they can improve relative to the information fed to them?

theterrasque, (edited )

LLMs don’t ingest information as such. The text gets broken into tokens (parts of words, like “catch” can be “cat” + “ch” for example), and then run through training. Training basically learns the statistical likelihood of which token follows an array of existing tokens. It’s in some ways similar to a Markov chain, but of course much more complex. It has layers of statistics, and preprocessors that can figure out which tokens to give higher precedence in the input text.

Basically the more parameters, the more and subtler patterns it can learn. Smaller models are often trained on fewer tokens than bigger ones, but it’s still a massive amount. IIRC it’s something like 1T tokens for 7b and 13b, and 1.4T tokens for 33b and 65b. In comparison to the models I linked, ChatGPT 3.5 is rumored to be 175b parameters.

In addition to just parameter size, you have quantization of the numbers. Originally each parameter in a model is a 16-bit float; it turns out you can reduce that to an 8-bit int or even 4 and 3 bits without too much of a hit to capability. There are different ways to quantize the parameters, with varying impact on the “smartness” of the model. By reducing the resolution of the numbers, the memory needed for the model is reduced, and in some cases the speed of running them is increased.
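
To put rough numbers on that (back-of-the-envelope, ignoring the extra memory needed for context and buffers): memory ≈ parameter count × bytes per parameter, so a 7b model is about 14GB at 16-bit float, around 7GB at 8-bit, and roughly 3.5-4GB at 4-bit, which is why quantized 7b/13b models fit in ordinary system RAM.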

When it comes to training, the best results have been achieved with full 16bit fp, but there are some techniques to train on quantized models too. The results I’ve seen from that is less impressive, but it’s been a while since last I looked at it.

Edit: I mentioned qlora previously, which is for training quantized models. I think that’s only available for gpu though.

Edit2: This might be a better markov chain explanation than the previous link

MigratingtoLemmy,

Thanks! I know absolutely nothing about machine learning, some of the terms you mentioned didn’t quite register - but I’ll try reading up on it. I was going to run Llama.cpp or a derivative, a GUI sounds nice to have.

Do you suggest I wait for GPU prices to go down to aim for the 16GB models? The higher end GPUs are exorbitantly priced.

Cheers

theterrasque,

Just ask if you want some clarification.

As for GPU, I’m waiting… IMHO it’s just too expensive now. And sadly, Nvidia is currently the only game in town. Some software works on amd, but just about everything works on Nvidia.

That said, my PC has 48gb system ram, and I can run 65b models on it with about 1s per token. With a few layers offloaded to my 10gb GPU. That would otherwise require 2x 3090 or 4090 (2x4090 would be about 20x faster though…)

MigratingtoLemmy,

I certainly will! I’m just not very good with maths either, and although I know what floating point numbers are, I would have to read more about it to make sure I understand your comment.

Those are some insane requirements to run models haha. How long does it take for you to train your models on datasets (for me, a “dataset” would be my entire Reddit/Lemmy comment history)?

theterrasque,

Another thing: llama.cpp supports offloading layers to the GPU, and you could try the OpenCL backend for that on non-Nvidia GPUs. But llama.cpp can also run CPU-only, with usable speed. On my system, it does about 150ms per token on a 13b model.

koboldcpp is probably the most straightforward to get running, since you don’t have to compile; it has a simple UI to set launch parameters, and it also has a web UI to chat with the bot in. And since it uses llama.cpp it supports everything llama.cpp does, including OpenCL (CLBlast in the launcher).
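
For a concrete starting point, running a quantized GGML model with llama.cpp looks roughly like this (a sketch; the model filename is a placeholder and the flags are from mid-2023 builds, so check --help on whatever version you grab):

    # -m = model file, -p = prompt, -n = max tokens to generate,
    # --n-gpu-layers offloads some layers to the GPU if the build supports it (optional)
    ./main -m ./models/llama-13b.ggmlv3.q4_0.bin -p "Write a haiku about self-hosting." -n 256 --n-gpu-layers 20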

MigratingtoLemmy,

Thanks, I’ll take a look

SheeEttin, in Is moving to IPv6 worth it?

Depends on how you define “worth it”. Most selfhosting is done not for worth, but for a hobby.

Zorque,

Hobbies are often worthwhile. Maybe not financially, but often psychologically.

pachrist,

Sometimes not financially or psychologically, and they also make my wife mad when I fat-finger some config.

OmltCat, in Anyone using "docker run" instead of "docker compose"?

Because it’s “quick start”. Least effort to get a taste of it. For actual deployment I would use compose as well.

Many projects also have an example docker-compose.yml in the repository if you dig into it a little.

There is www.composerize.com to convert run command to compose. Works ~80% of the time.

I honestly don’t understand why anyone would make “curl and bash” the official installation method these days, with docker around. Unless this is the ONLY thing you install on the system, so many things can go wrong.
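
As an illustration of what that conversion amounts to, here’s a generic quick-start style docker run command and the docker-compose.yml it maps to (a sketch; the image name, ports and paths are placeholders):

    # quick-start style one-liner:
    docker run -d --name myapp --restart unless-stopped -p 8080:80 -v "$(pwd)/data:/data" -e TZ=Europe/Berlin example/myapp:latest

    # the same thing expressed as docker-compose.yml:
    services:
      myapp:
        image: example/myapp:latest
        restart: unless-stopped
        ports:
          - "8080:80"
        volumes:
          - ./data:/data
        environment:
          - TZ=Europe/Berlin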

anonymoose,

Out of curiosity, is there much overhead to using docker compared to installing via curl and bash? I’m guessing there are some redundant layers that docker uses?

Shrek,

Of course, but the amount of overhead completely depends on the container. The reason I am willing to accept the (in my experience) very small amount of overhead I typically get is that the repeatability is amazing with docker.

My first server was unRAID (freebsd, not Linux), I setup proxmox (debian with a webui) later. I took my unRAID server down for maintenance but wanted a certain service to stay up. So I copied a backup from unRAID to another server and had the service running in minutes. If it was a package, there is no guarantee that it would have been built for both OSes, both builds were the same version, or they used the same libraries.

My favorite way to extend the above is Docker Compose. I create a folder with a docker-compose.yml file and I can keep EVERYTHING for that service in a single folder. unRAID doesn’t use Docker Compose in its webui. So, I try to stick to keeping things in Proxmox for ease of transfer and stuff.

anonymoose,

Makes sense! I have a bunch of services (plex, radarr, sonarr, gluetun, etc) on my media server on Armbian running as docker containers. The ease of management is just something else! My HC2 doesn’t seem to break a sweat running about a dozen containers, so the overhead can’t be too bad.

Shrek,

Yeah, that’s going to come completely down to the containers you’re running and the people who designed them. If the container is built on Alpine Linux, you can pretty much trust that it’s going to have barely any overhead. But if a container is built on an Ubuntu Docker image, it will have a bunch of services that probably aren’t needed in a typical docker container.

anonymoose,

Good point. Most containers I’ve used do seem to use Alpine as a base. Found this StackOverflow post that compared native vs container performance, and containers fare really well!

Shrek,

It seems like that data is from 2014 as well. I’m sure the numbers would have improved in almost ten years too!

macstainless,

Omg I never knew about composerize or it-tools. This would save me a ton of headaches. Absolutely using this in the future.

Shrek,

I used to host composerize. Now I host it-tools which has its own version and many other super helpful tools!

Heastes,

I was going to mention it-tools. It’s great!
And if you need more stuff in a similar vein, cyberchef is also pretty neat.

Shrek,

Nice! I wonder if there’s anything one has that the other doesn’t.

beaumains,

You have changed my life today.

Shrek,

No, the creator of it-tools did. I just told you about it. Give them a star on GitHub and maybe donate if you can ❤️

roofuskit, in Intel is quitting on its adorable, powerful, and upgradable mini NUC computers

I think this has more to do with the refurbished small form factor business PCs eating up their market share as they flooded the market. I can get a decent i5 unit for $100 and throw another $100 into it in upgrades and hit the same performance as their $300-400+ price range.

Thurgo,

I found an HP SFF for like $60 at the thrift store with a 4th gen i5, and it was kitted out with more RAM and a 250GB SSD. Perfect HTPC for what I do. I was shopping NUCs too.

dan,

Good find! I live in the San Francisco Bay Area and all the thrift stores near me are overpriced, so I never find good deals like that.

Thurgo,

I got this at my local overpriced thrift store and was surprised they didn’t want a shit ton for it. This place will put ebay listing (not even sold) prices on their electronics. I think it came out of their office or something.

marsokod, (edited ) in [ELI5] What is a reverse proxy exactly and how do I use it to run several dockerized services on one machine?

I’ll provide an ELI5, though if you actually want to use it you’ll have to go beyond ELI5.

You contact a web service via a combination of IP address and port. For the sake of simplicity, we can assume that a domain name is equivalent to an IP address. You can then compare domain name/port with street name/street number: you need both to actually find someone. By default, some street numbers are really standard, like 443 for a regular encrypted connection. But you can have any service on any street number, it’s just less nice and less standard. This is usually done on closed networks.

Now what happens if you have a lot of services and you want all of them reachable at address 443? Well basically you are now in the same situation as a business building with a lobby. Whenever you want to contact a service, you go to 443, ask the reception what floor they are on, and they will direct you there. The reception desk is your proxy: just making sure you talk to the right people.
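
To make the lobby analogy concrete, a reverse proxy such as nginx plays the reception desk with a config along these lines (a sketch; the hostnames and internal ports are placeholders, and the TLS certificate directives are omitted):

    # Everyone arrives at the same address and port (443); nginx looks at the requested
    # hostname and forwards the visitor to the right internal service.
    server {
        listen 443 ssl;
        server_name photos.example.com;
        location / {
            proxy_pass http://127.0.0.1:8081;   # the photos container
        }
    }
    server {
        listen 443 ssl;
        server_name media.example.com;
        location / {
            proxy_pass http://127.0.0.1:8082;   # the media server container
        }
    }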

blindbunny, in If anyone is near MN MyPillow is aucioning off some server equipment

I doubt they have the know-how to clear that server, just saying.

PM_ME_UR_PCAPS,

They’ve helpfully labeled which ones they factory reset. I wonder what’s on those non-reset machines 🤔

const_void,

100% sure it’s the proof of the 2020 election fraud

match,

Can’t believe MyPillow is defending Biden like this

navi,

HUNTER BIDEN’S LAPTOP 42U SERVER RACK!

kratoz29, in What are your most used selfhosted services?

Pihole, Bitwarden and Plex.
