theterrasque

@theterrasque@infosec.pub


theterrasque,

Llama3 70b is pretty good, and you can run that on 2x 3090s. Not cheap, but doable.

You could also use something like runpod to test it out cheaply

theterrasque,

Koboldcpp is way easier. Download exe, double click exe, open gguf file with the AI model, click start.

Then put on your robe and wizard hat

theterrasque,

I’m not saying it’s broken, but it has some design choices and functions that make even WhatsApp a better choice for privacy-minded people. Like rolling their own crypto and not having e2ee on by default.

theterrasque,

So you’re saying it’s already feature complete with most json libraries out there?

theterrasque,

You realise there is no algorithm behind Lemmy, right?

Of course there is. Even “sort by newest” is an algorithm, and the default view is more complicated than that.

You aren’t being shoved controversial polarizing content subliminally here.

Neither are you on TikTok, unless you actively go looking for it

theterrasque,

I’ve seen Skype do that. It was a weird folder name, but gallery found it and displayed the images.

Which is how I noticed it in the first place

theterrasque,

I wonder what bpm Moby’s Thousand starts at… Maybe it can reach both limits

theterrasque,

Who are you?

What do you want?

Also, I think good and bad are a bit fluid there. It’s just people with different agendas. Well, except Emperor Cartagia. And perhaps Bester.

theterrasque,

Yep, I usually make docker environments for cuda workloads because of these things. Much more reliable

theterrasque,

On occasion their strategy has been “if we send in enough people, they’ll eventually run out of bullets”

They out-Zapp Brannigan’ed Zapp Brannigan. That should terrify you on multiple levels

Michael Cohen cites AI-generated court cases in his defense (www.theverge.com)

Michael Cohen, the former lawyer for Donald Trump, admitted to citing fake, AI-generated court cases in a legal document that wound up in front of a federal judge, as reported earlier by The New York Times. A filing unsealed on Friday says Cohen used Google’s Bard to perform research after mistaking it for “a super-charged...

theterrasque, (edited)

I gotta ask… were you around and actively using XMPP around that time?

Because I was. And XMPP’s struggles had nothing to do with Google

theterrasque,

He’s arguably a big enough target to actually worry about custom hardware modification attacks.

Should I move to Docker?

I’m a retired Unix admin. It was my job from the early '90s until the mid '10s. I’ve kept somewhat current ever since by running various machines at home. So far I’ve managed to avoid using Docker at home even though I have a decent understanding of how it works - I stopped being a sysadmin in the mid '10s, I still worked...

theterrasque, (edited )

It’s a great tool to have in the toolbox. Might take some time to wrap your head around, but coming from VMs you already have most of the base understanding.

From a VM user’s perspective, some translations:

  • Dockerfile = a script to set up a VM from a base distro, plus a checkpoint that is used as the base image for starting up VMs
  • A container is roughly similar to a running VM. It runs inside the host OS, jailed, which accounts for its low overhead.
  • When a container is killed, every file system change gets thrown out. Certain paths and files can be mapped to host folders / storage to keep data between restarts.
  • Containers run on their own internal network. You can specify ports to NAT in from the host interface to containers.
  • Most service setup is done by specifying environment variables for the container, or by mapping in a config file or folder.
  • Since the base image is static and config is per container, one image can be used to run multiple containers. So if you have a postgres image, you can run many containers from that image, and specify different config for each instance.
  • Docker compose is used for multiple containers and their relationships. For example a web service with a DB, static file server, and redis cache. Docker compose also handles things like setting up a unique network for the containers, storage volumes, logs, internal name resolution, unique names for the containers, and so on.
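To tie those bullets together, here’s a minimal sketch. The container names, host port numbers, and host path are made up for illustration; it assumes a local docker daemon and the official postgres image:

```shell
# One image, many containers. Port flags NAT host:container, the -v flag
# maps a host folder in so data survives restarts, and config goes in
# via environment variables.
docker run -d --name mydb \
  -p 5433:5432 \
  -v /srv/pgdata:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=example \
  postgres:16

# Second, independent container from the same image, with its own config:
docker run -d --name otherdb \
  -p 5434:5432 \
  -e POSTGRES_PASSWORD=other \
  postgres:16
```

Kill either container and its internal filesystem changes are gone; only the mapped /srv/pgdata survives.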

A small tip: you can “exec” into a running container, which will run a command inside that container. Combined with interactive (-i) and terminal (-t) flags, it’s a good way to get a shell into a running container and have a look around or poke things. Sort of like getting a shell on a VM.
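For example, assuming a running container named “mydb” (a hypothetical name), that looks like:

```shell
# Interactive shell inside the running container:
docker exec -it mydb bash

# Or a one-off command, no shell needed:
docker exec mydb cat /etc/os-release
```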

One thing that’s often confusing for new people is image tags, partially because “tag” can mean two things. For example “postgres” is a tag. That tag is attached to an image. The actual “name” of an image is its SHA digest. An image can have multiple tags attached. So far so good, right?

Now, let’s get complicated. The full tag for “postgres” is actually “docker.io/library/postgres:latest”. You see, every tag is a URL, and if it doesn’t have a domain name, docker uses its own. And then we get to the “:latest” part. Which is also called a tag. Yup, all tags have a tag. If one isn’t given, it’s automatically set to “latest”. This is used for versioning and different builds.

For example postgres has tags like “16.1”, which points to the latest 16.1.x image, built on the postgres maintainers’ preferred distro; “16.1-alpine”, which points to the latest Alpine-based 16.1.x version; “16” for the latest 16.x.x version; “alpine” for the latest Alpine-based version, be it 16 or 17 or 18… and so on.
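So, for the official postgres image, these all name the same thing at different levels of pinning:

```shell
docker pull postgres                            # implicit :latest
docker pull postgres:latest                     # same image, tag spelled out
docker pull docker.io/library/postgres:latest   # fully qualified form of the same tag

docker pull postgres:16.1-alpine                # pinned minor version, Alpine build
```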

The images on docker hub are made by … well, other people. Often the developers of that software themselves, sometimes by docker, sometimes by random people. You can make your own account there, it’s free. If you make an image and push it, it will be available as shdwdrgn/name - if a tag doesn’t have a user component, it’s maintained / sanctioned by docker.

You can also run your own image repository service, as long as it has https with valid cert. Then it will be yourdomain.tld/something
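Pushing to your own registry is just a matter of retagging. Here “yourdomain.tld” and “myimage” are placeholders, and it assumes the registry is already up and serving https:

```shell
# Give the local image a tag that points at your registry, then push it:
docker tag myimage yourdomain.tld/myimage:1.0
docker push yourdomain.tld/myimage:1.0

# From any machine that trusts the cert:
docker pull yourdomain.tld/myimage:1.0
```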

So that was a brief introduction to the strange world of docker. Docker is a for-profit company, btw. But the image format is standardized, and there are fully open source ways to make and run images too. Off the top of my head, podman and Kubernetes.

theterrasque,

For the NFS shares, there are generally two approaches. The first is to mount the share on the host OS, then map it into the container. Let’s say the host has the NFS share at /nfs, and the folders you need are at /nfs/homes. You could do “docker run -v /nfs/homes:/homes smtpserverimage” and those folders would be available at /homes inside the container.

The second approach is to set up NFS inside the image, and have that connect directly to the nfs server. This is generally seen as a bad idea since it complicates the image and tightly couples the image to a specific configuration. But there are of course exceptions to each rule, so it’s good to keep in mind.

With database servers, you’d have that set up for accepting network connections, and then just give the address and login details in config.
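In a compose file that might look like this fragment. The “mail” service and the variable names are illustrative; reusing the hypothetical smtpserverimage from above, the app would read its DB settings from the environment:

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
  mail:
    image: smtpserverimage      # hypothetical app image
    environment:
      DB_HOST: db               # the service name resolves inside the compose network
      DB_USER: postgres
      DB_PASSWORD: example
```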

And having a special setup… How special are we talking? If it’s configuration, that’s handled by env vars and mapping in config files. If it’s specific plugins or compile options… Most published images tend to cast a wide net, usually take a very “everything included” approach, and provide instructions / mechanisms for adding plugins to the image.

If you can’t find what you’re looking for, you can build your own image. Generally that’s done by basing your Dockerfile on an official image for that software, then making your changes. We can again take the “postgres” image, since that’s a fairly well made one with exactly the kind of easy extension hook we’re looking for.

If you would like to do additional initialization in an image derived from this one, add one or more *.sql, *.sql.gz, or *.sh scripts under /docker-entrypoint-initdb.d (creating the directory if necessary). After the entrypoint calls initdb to create the default postgres user and database, it will run any *.sql files, run any executable *.sh scripts, and source any non-executable *.sh scripts found in that directory to do further initialization before starting the service.

So if you have a .sh script that does some extra stuff before the DB starts up, let’s say “mymagicpostgresthings.sh” and you want an image that includes that, based on Postgresql 16, you could make this Dockerfile in the same folder as that file:


```dockerfile
FROM postgres:16
RUN mkdir -p /docker-entrypoint-initdb.d
COPY mymagicpostgresthings.sh /docker-entrypoint-initdb.d/mymagicpostgresthings.sh
RUN chmod a+x /docker-entrypoint-initdb.d/mymagicpostgresthings.sh
```

and when you run “docker build . -t mymagicpostgres” in that folder, it will build that image with your file included, and call it “mymagicpostgres” - which you can run by doing “docker run -e POSTGRES_PASSWORD=mysecretpassword -p 5432:5432 mymagicpostgres”

In some cases you need a more complex approach. For example I have an nginx streaming server - which needs extra patches. I found this repository for just that, and if you look at its Dockerfile you can see each step it does. I needed a few modifications to that, so I have my own copy with a different nginx.conf, an extra patch it downloads and applies to the source code, and a startup script that changes some settings from env vars, but that had 90% of the work done.

So depending on how big the changes you need are, you might have to recreate it from scratch, or you can piggyback on what’s already made. And for “docker script to launch it”, that’s usually a docker-compose.yml file. Here’s a postgres example:


```yaml
version: '3.1'

services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
```

If you run “docker compose up -d” in that file’s folder, docker will download and start up the postgres and adminer images, and forward port 8080 in to adminer. From adminer’s point of view, the postgres server is available as “db”. And since both have “restart: always”, if one of them crashes or the machine reboots, docker will start them up again. So they will keep running until you run “docker compose down” or something catastrophic happens.
