It was designed to work with "alternatively obtained" games such as DRM-free games. While Crackpipe can be used with cracked games, it does not encourage or condone piracy.
People, come on.
First of all: the name, the pirate logo, the terminology "alternatively obtained" - this is clearly for sharing cracked/pirated games. Any plausible deniability is out the window, especially when the screenshots use copyrighted game box art.
If you changed the language to be something like:
It's designed to assist with sharing games with friends by providing a mechanism for downloading and managing game installations. Please review your game's licenses to ensure this is an acceptable use before sharing.
Then you'd be able to say "this is meant for sharing freeware/shareware easily and making it a social experience."
Also change the name and logo, and get those copyrighted box arts out of the screenshots - just use art from open source games like SuperTuxKart, OpenRA, etc. (Technically those may be copyrighted too, depending on the game, but at least you're not dealing with fucking Sony by showing a Spider-Man game).
You can absolutely self-host LLMs. The HELM team has done an excellent job benchmarking the efficiency of different models on specific tasks, so that would be a good place to start. You can balance model performance on your specific task against the model's efficiency - in most situations, larger models perform better but use more GPUs or are only available via APIs.
There are currently 3 different approaches to using AI for a custom task or application -
Train a base LLM from scratch - this is like creating your own GPT-style model. This gives you the maximum level of control; however, the amount of compute, time, and data required for training makes it impractical for most end users. There are many open source base LLMs already published on HuggingFace that can be used instead.
Fine-tune a base LLM - starting with a base LLM, you can fine-tune it for a certain set of tasks. For example, you can fine-tune a model to follow instructions or to use as a chatbot. InstructGPT and GPT-3.5+ are examples of fine-tuned models. This approach lets you create a model that understands a specific domain or a set of instructions particularly well compared to the base LLM. However, any approach that requires training a large model will be expensive. If you are starting out, I'd suggest exploring this as a v2 step for improving your model.
Prompt engineering or indexing using an existing LLM - starting with an existing model, create prompts to achieve your objective. This approach gives you the least control over the model itself, but it is the most efficient, and I would suggest it as the first approach to try. Langchain is the most widely used tool for prompt engineering and supports self-hosted base or instruct LLMs. If your task is search and retrieval, an embeddings model is used instead: you generate embeddings for all your content and store them as vectors; for a user query, you convert the query to an embedding with the same model and retrieve the most similar content by vector similarity. Langchain provides this capability, but IMO sentence-transformers may be a better starting point for a self-hosted retrieval application. Without any intention to hijack this post, you can check out my project - synology-photos-nlp-search - as an example of a self-hosted retrieval application.
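The index/query/rank flow described above can be sketched end to end. This is a minimal illustration, not a real implementation: the embed() function here is a toy bag-of-words counter standing in for an actual model like sentence-transformers, but the steps (embed all content once, embed the query with the same model, rank by similarity) are the same.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count vector.
    A real setup would call a sentence-transformers model here instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "how to self host a photo library",
    "fine tuning a base language model",
    "prompt engineering with an existing model",
]

# Index step: embed all content once and store the vectors.
index = [(doc, embed(doc)) for doc in documents]

def search(query, k=1):
    """Retrieve the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(search("self hosted photos"))
```

Swapping the toy embed() for a real model is the only change needed to turn this into semantic rather than keyword matching.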
To learn more, I have found the recent deeplearning.ai short courses to be quite good - they are short, comprehensive, and free.
Federation is implemented by copying the content from other servers to your database and file system, so if your users subscribe to something from a different server it will be copied to your server.
But it will only be served to your users, not to the public. Only the communities hosted on your instance are served to the public.
You can if someone else subscribed to it in the past. If nobody ever did, then that community is unknown to kbin and you won't find any data on it whether you're logged in or not.
My understanding is that instances have worker threads that continually pull new data from linked communities (the ones at least one person is subscribed to). It should be almost instant, but recently it's sometimes delayed due to a huge influx of traffic.
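That pull model can be sketched in a few lines. This is a hedged simulation, not real federation code: the dicts stand in for a remote instance and the local database, and actual ActivityPub federation is largely push-based delivery rather than polling.

```python
# Hypothetical in-memory stand-ins for a remote instance and the local DB.
remote_instance = {"community": ["post 1", "post 2"]}
local_db = {"community": []}

def pull_updates(community):
    """Copy any posts we haven't seen yet from the remote server into
    our own database, as the federation workers described above do."""
    seen = len(local_db[community])
    new_posts = remote_instance[community][seen:]
    local_db[community].extend(new_posts)
    return new_posts

# A worker would run this in a loop, sleeping between polls.
for _ in range(2):
    pull_updates("community")

print(local_db["community"])
```

The second loop iteration finds nothing new, which is why the copy is effectively "almost instant" once the worker has caught up.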
I’d prefer if we stopped bringing up Reddit altogether. We no longer use the platform, we should be happy with what we have here instead of constantly peeping into the neighbor’s garden.
DRM-free isn't really something you can guarantee unless you're fine with some games simply not being available on your platform. Some devs insist on DRM, and if you require DRM-free, they won't sell on your platform.
Personally, I prefer DRM-free, but if a platform is DRM-free only and the game I want thus isn't there, I won't use it.
Cloud saves are nice, but with a decentralized solution that gets harder to do. Who is "the cloud"?
It’s unironically one of the best pieces of self-hosted software there is. Its tag management feature is unmatched; I wish a similar system existed in Paperless.
There are quite a few GitHub repos with projects filled with buzzwords and other bullshit where I still don't understand wtf the software is actually for 🤔. I don't get it.
God, yes! I see updates like this and just go, “Cool, moving on” because I have no idea what it does, and if you’re trying to get me to adopt your cool thing, I’m going to need more than what appears to be some random strings of words. 😅
auto manufacturers had violated Washington state’s privacy laws by using vehicles’ on-board infotainment systems to record and intercept customers’ private text messages and mobile phone call logs.
But the appellate judge ruled Tuesday that the interception and recording of mobile phone activity did not meet the Washington Privacy Act’s standard
Privacy is a fundamental human right.
Just not in the USA, it seems. Here it is indeed the law that needs to be fixed.
They even have a term for this — local-first software — and point to apps like Obsidian as proof that it can work.
This touches on something that I’ve been struggling to put into words. I feel like some of the ideas that led to the separation of files and applications to manipulate them have been forgotten.
There’s also a common misunderstanding that files only exist in blocks on physical devices. But files are more of an interface to data than an actual “thing”. I want to present my files - wherever they may be - to all sorts of different applications which let me interact with them in different ways.
Only some self-hosted software grants us this portability.
> I want to present my files - wherever they may be - to all sorts of different applications which let me interact with them in different ways. Only some self-hosted software grants us this portability.
I’d say almost everything is already covered by Samba shares and Docker bind mounts. With Samba shares the data is presented across the network to my Kodi clients, the file browser on my phone, and the file browsers of all my computers. And with Docker bind mounts those files are presented to any services that I want to run.
Devil’s advocate: what about the posts and comments I’ve made via Lemmy? They could be presented as files (like email). I could read, write and remove them. I could edit my comments with Microsoft Word or ed. I could run some machine learning processing on all my comments in a Docker container using just a bind mount like you mentioned. I could back them up to Backblaze B2 or a USB drive with the same tools.
But I can’t. They’re in a PostgreSQL database (which I can’t query), accessible via an HTTP API. I’ve actually written a Lemmy API client, then used that to make a read-only file system interface to Lemmy (pkg.go.dev/olowe.co/lemmy). Using that file system I’ve written an app to access Lemmy from a weird text editing environment I use (developed at least 30 years before Lemmy was even written!): lemmy.sdf.org/post/1035382
That makes sense. I think the reason they’re not represented as files is pretty simple: data integrity. If you want the comments, you just query the table, and as long as the DB schema is what you expect it’ll work just fine - you don’t have to validate that the data hasn’t been corrupted (you don’t have to check that a column exists, for example). With files, you’d need to parse and validate every single file, because another application could have screwed them up. It’s certainly possible to build this - it might be slower, but computers are pretty fast these days - and it would require more development work to solve the problem that the database solves for you.
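To make that trade-off concrete, here is a minimal sketch; the comment table is a made-up stand-in for Lemmy's actual schema. The database route is one query with the schema enforced for you; the file route exports each row so any tool (Word, ed, a backup script) can touch it, at the cost of every reader having to re-validate what it parses.

```python
import sqlite3
import tempfile
from pathlib import Path

# Hypothetical schema standing in for Lemmy's real comment table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE comment (id INTEGER PRIMARY KEY, body TEXT)")
db.executemany("INSERT INTO comment (id, body) VALUES (?, ?)",
               [(1, "first comment"), (2, "second comment")])

# Database route: one query, and the DB guarantees the columns exist.
rows = db.execute("SELECT id, body FROM comment ORDER BY id").fetchall()

# File route: export each comment as a plain file so any tool can use it.
# Now every consumer must validate each file, since any app may edit them.
outdir = Path(tempfile.mkdtemp())
for cid, body in rows:
    (outdir / f"{cid}.txt").write_text(body)

print(sorted(p.name for p in outdir.iterdir()))
```

The export loop is trivial here; the real cost lives on the read-back side, where each file would need the validation that the schema gave you for free.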
This is what I did. In Europe, viable options start at 200€ on eBay (imo). If your use case outgrows one Lenovo Tiny (which is unlikely since you’re coming from a Pi), you can buy more or other tiny PCs, a desktop PC, or a server rack and put Proxmox on everything to run services inside a cluster.
NO! This is terrible science reporting! The study doesn't say it MAKES you sad; the report said it's ASSOCIATED with being sad. There was NO causation shown in this study. It could just as easily be that sadness leads to eating processed food, which seems at least as likely as the other way around. And it could equally well be that some third factor is causing both the junk food eating and the sadness.
OP, if you care at all about being honest and not spreading misinformation, then you should delete your blog post and this Lemmy post.