
andrew0

@andrew0@lemmy.dbzer0.com

This profile is from a federated server and may be incomplete. Browse more on the original instance.

andrew0,

What db2 already said. Microsoft just released Phi-3 mini, which can allegedly run locally on newer smartphones.

If I understood correctly, the Rabbit thingy just captures your information locally and then forwards it to their server. So, if you want more power, you could probably do the same by submitting the same info to a bigger open-source model than Phi-3, like Llama 3, hosted on your homelab. I believe you can set it up with huggingface/gradio, which more or less provides an API you could use.
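The "forward it to your homelab" part could be as little as a POST request. A minimal sketch with only the standard library; the URL, port, and JSON fields here are made up, so match them to whatever server (e.g. a gradio app) you actually run:

```python
import json
import urllib.request

def build_request(prompt, url="http://homelab.local:8000/generate"):
    """Package a captured prompt for a self-hosted model endpoint.
    The endpoint URL and JSON schema are hypothetical examples."""
    payload = json.dumps({"prompt": prompt, "max_new_tokens": 256}).encode()
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )

req = build_request("What's on my calendar today?")
# urllib.request.urlopen(req)  # only works once your server is actually up
```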

That way, you don’t need a shitty orange box, and can always get the latest open source models with a few lines of code. There are plenty of open source frameworks in the works at the moment, and I believe that we’re not far off from having multi-modal LLMs running on homelab-level hardware (if you don’t mind a bit of lag).

andrew0,

Good luck! You can try the huggingface-chat repo, or ollama with this web-ui. Both should be decent, as they have instructions to set up a docker container.

I believe the Llama 3 models are out there in a torrent somewhere, but I didn’t dig around to find it. For the 70B model, you’ll probably need around 64GB of RAM available, but the 8B one should run fine with just 8GB. It will be somewhat slow, though, compared to the ChatGPT experience. The self-attention mechanism can be parallelized, which is why you’ll see much better results on a GPU. According to some people who tested it, if you offload some layers to RAM, you can see ~10-12 tokens per second on an RTX 3090 for certain 70B models. More capable ones can drop below 1 token per second, all depending on the context window you use.
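As a back-of-the-envelope check on those RAM numbers (a rough sketch; bytes-per-parameter figures are approximations and this ignores the KV cache and runtime overhead):

```python
def model_memory_gb(params_billions, bytes_per_param):
    """Rough memory needed just for the weights (ignores KV cache,
    activations, and framework overhead)."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

# fp16 weights: 2 bytes per parameter
print(round(model_memory_gb(70, 2)))    # ~130 GB: needs heavy offloading
print(round(model_memory_gb(8, 2)))     # ~15 GB: barely fits a 16GB card
# 4-bit quantization: ~0.5 bytes per parameter
print(round(model_memory_gb(70, 0.5)))  # ~33 GB: much more manageable
```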

If you don’t have a GPU available, just give the Phi-3 model a try :D If you quantize it to 4 bits, it can apparently hit 12 tokens per second on an iPhone haha. It should play nice with pooling information from a search engine, or from a vector database like Milvus, Qdrant or Chroma.
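The vector-database part boils down to nearest-neighbour search over embeddings. A toy illustration with made-up 3-d vectors (real setups like Milvus/Qdrant/Chroma store thousands of high-dimensional embeddings produced by a model):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "vector DB": document titles mapped to invented embeddings.
docs = {
    "cat care tips": [0.9, 0.1, 0.0],
    "gpu benchmarks": [0.0, 0.2, 0.9],
}
query = [0.1, 0.1, 0.8]  # pretend this embeds the user's question
best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)  # gpu benchmarks
```

The retrieved text then gets pasted into the LLM prompt as context.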

andrew0,

Entry 30 looks so clean. It’s a pretty smart design.

andrew0,

That is good to know. I tried the free version of Roll20 before, and it definitely felt lacking in certain areas. Oh, and thanks for letting me know about the sale! I’ll definitely keep an eye out for that one :)

andrew0,

How will you move to WhatsApp if everyone else uses iMessage? Europe has the same issue, but reversed. Everyone uses WhatsApp and can’t jump to Signal/Telegram because they’re not as popular.

andrew0,

With the way current LLMs operate? The short answer is no. Most machine learning models learn a probability distribution through backpropagation, which involves “trickling down” errors from the output node all the way back to the input. More specifically, the computer calculates the derivatives of each layer and uses them to slowly nudge the model towards the correct answer by updating the values in each neural layer. Of course, things like the attention mechanism resemble the way humans pay attention, but the underlying processes are vastly different.
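That "nudging" can be shown with a toy single-weight model (illustrative numbers, nothing like a real LLM training loop):

```python
# One "neuron": y = w * x. Loss = (y - target)^2.
# Backprop here = compute dLoss/dw via the chain rule, then step downhill.
def train(x=2.0, target=10.0, w=0.0, lr=0.05, steps=200):
    for _ in range(steps):
        y = w * x                    # forward pass
        grad = 2 * (y - target) * x  # dLoss/dw by the chain rule
        w -= lr * grad               # nudge the weight against the gradient
    return w

print(round(train(), 3))  # converges towards 5.0, since 5 * 2 = 10
```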

In the brain, things don’t really work like that. Neurons don’t perform backpropagation; if I remember correctly, they instead build proteins that improve conductivity along the axons. This lets a neuron strengthen a connection the more current passes through it. Similarly, when multiple neurons in a close region fire together, they sort of wire together. New connections between neurons can appear from this process, which neuroscientists refer to as neuroplasticity.
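The contrast with backprop is easy to see in a crude caricature of that “fire together, wire together” rule (the learning rate and activations here are arbitrary toy values):

```python
def hebbian_step(w, pre, post, lr=0.1):
    """Strengthen a connection in proportion to how strongly the pre- and
    post-synaptic units co-activate. Note there is no error signal
    trickling back from an output, unlike backpropagation."""
    return w + lr * pre * post

w = 0.2
for _ in range(5):
    w = hebbian_step(w, pre=1.0, post=1.0)  # repeated co-firing
print(round(w, 2))  # 0.7: the synapse got stronger
```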

When it comes to the Doom example you’ve given, that approach relies on the fact that you can encode the visual information as signals. It is a reinforcement learning problem where the action space is small and the reward function is pretty straightforward. When it comes to LLMs, the usual vocabulary size of the more popular models is between 30k and 60k tokens (small parts of a word, for example “#ing” in “writing”). That means you would need a way to encode each token to feed to the biological neural net, and unless you encode it as a phonetic representation of the word, you’re going to need a lot of neurons to mimic the behaviour of the computer version of LLMs, which is not really feasible. Oh, and let’s not forget that you would need to formalize the output of the network and find a way to measure it! How would we know which neuron produces the output for a specific part of a sentence?
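To make the “#ing” example concrete, here is a toy greedy longest-match subword splitter, loosely in the spirit of the BPE/WordPiece tokenizers real LLMs use (the three-entry vocab is obviously invented; real vocabs have tens of thousands of pieces):

```python
def tokenize(word, vocab):
    """Greedy longest-match subword split over a tiny toy vocabulary."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab:
                # continuation pieces get the '#' marker from the comment
                tokens.append(piece if i == 0 else "#" + piece)
                i = j
                break
        else:
            tokens.append(word[i])  # unknown character falls back to itself
            i += 1
    return tokens

vocab = {"writ", "ing", "write"}
print(tokenize("writing", vocab))  # ['writ', '#ing']
```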

We humans are capable of learning language mainly because this skill is encoded in our DNA. It is a very complex problem that requires interaction between multiple specialized areas: e.g. Broca’s area (speech production), Wernicke’s area (language comprehension), parts of the inferior temporal cortex that handle categorization of words and other tasks, plus the hippocampus for encoding memories. The body builds these areas from the genetic code, which has been iteratively refined over many millennia. If you dive really deep into this subject, you’ll find scientists who argue that consciousness is not really a thing, that we are just a product of our genes and the surrounding environment, and that we act in predefined ways.

Therefore, you wouldn’t be able to call a small neuron array conscious. It only carries out a simple electrochemical process, which occurs when you supply enough current for a few neurons to reach the threshold potential of about -55 mV. To have things like emotion, body autonomy, and the many other things one thinks of when talking about consciousness, you would need a lot more components.
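That threshold behaviour is often modelled as a leaky integrate-and-fire neuron. A toy sketch (the leak factor and input currents are illustrative constants, not physiological measurements):

```python
def integrate_and_fire(currents, threshold=-55.0, rest=-70.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron. The membrane potential (mV)
    decays towards rest while input current pushes it up; crossing the
    ~-55 mV threshold counts as a spike and resets the potential."""
    v, spikes = rest, 0
    for i in currents:
        v = rest + leak * (v - rest) + i  # leak towards rest, add input
        if v >= threshold:
            spikes += 1
            v = rest  # reset after the action potential
    return spikes

print(integrate_and_fire([1.0] * 20))  # weak input never reaches threshold
print(integrate_and_fire([9.0] * 20))  # strong input fires repeatedly
```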

andrew0,

I got NFS Most Wanted (2005) working in Wine, and was somewhat impressed by how easy it was at the time. The game worked quite well, and would only crash once in a while with some cryptic errors that I don’t remember. Made me hopeful for the future of Linux gaming :)

andrew0,

And for some reason you still can’t top up transport cards online or with a credit/debit card if you don’t have a Japanese phone. Think that’s coming in 2035 at this rate? 🤣

andrew0,

Yeah, it’s the Osaifu-Keitai. Apple enables it on all iPhones on the market, while Android manufacturers avoid adding it to phones sold outside Japan because they would have to pay fees to Sony for it. The funny part is that Sony itself doesn’t enable it on its phones outside Japan, even though FeliCa is a Sony subsidiary :D

Another funny bit is that some phones, like the Pixel, are capable of running it even when made for other markets. Some users were able to trick the Osaifu-Keitai app into thinking the phone was made in Japan, and that was all it took to enable it (although you’d have to root your phone, and the manufacturer must have released the phone in Japan too, to ensure the chip is capable). So yeah, although a few years ago a dedicated chip might have been required, nowadays it’s mostly software that stops you from using the one you already have while in Japan.

All in all, PASMO/Suica/etc. is basically a very limited debit card haha. I guess Japanese people enjoy using it mainly because it caps how much they can spend (iirc, about 100 euros can be loaded on the card at once). Japan is a highly consumerist society, so this format was probably adopted (instead of credit/debit cards) partly to combat that :D

andrew0,

The job market seems terrible right now, especially if you’re just starting out. A friend of mine applied to ~100 jobs in tech, and the majority didn’t even reply.

But the same can be said about bad HR as well. How many hiring teams have no idea what a candidate is supposed to be doing?

andrew0,

I’ve got a dedicated crew I’ve been playing with for the past 3-4 years or so. The game has felt stale for us for a while this year, but it seems to be getting better.

PvP has been improving lately, with hitreg fixes and server-side hit confirmation, but the community can definitely be toxic sometimes. Most old-school players who enjoyed PvP were somewhat pushed out by Rare when they removed Arena and shifted the focus away from fighting towards PvE. This left many players bitter about the new direction of the game, and toxic in Adventure. Hourglass maybe fixed this a bit, but it is very annoying to sink and then have to spend 30 more minutes gathering supplies to fight again. That, paired with Rare catering a lot more to the PvE side lately (plus a really dry 2023 in terms of content), led to many (sort of young) players going into Adventure with the intention of trolling others.

Safer Seas maybe helped too, but it also signals to every player that you can’t be that new if you’re in High Seas, which made a lot of players I know change their mentality from “see what they’re up to” to “shoot them if they approach”. Gone are the days when you would see a ship, sail up, and say “hey buddy, need help figuring things out?” haha

Long story short, it is a really nice game that will keep you entertained for thousands of hours. Don’t run away from fights, and keep in mind that the booty is not yours until you sell it :D The game has its fair share of bugs, and sometimes you will lose a fight just because of server issues, but don’t let that discourage you from playing. If you need any help, I’m sure a lot of us here can try to offer a hand :)

Have fun on the seas!

andrew0,

They do have some matchmaking, at least for Hourglass and really early on in your voyages, but it gets disregarded as soon as the player pool in your region dries up. Sea of Thieves has had some player-retention issues, and the Steam data suggests the game has been bleeding players for the past 2-3 years. Of course, there are still players on Game Pass and the MS Store, but I expect those markets have seen the same decline.

My crew and I have been put into fights against streamers, day-one players, and other sweats while just playing casually. It’s fun though, and the feeling you get when you beat those kinds of crews is amazing. When you lose, however, it’s really annoying because you can’t just hop back into the action (as I mentioned above, 30 minutes of resupplying). Rare could have made some really nice game modes like Arena, for example using the Sea Dogs Tavern as a TDM point (a sort of king of the hill, or capture the flag), or 4-5 smaller ships against a bigger crew manning the Black Pearl, and perhaps more, packed into a rotating game mode. That would have solved some of the issues the PvP community is facing, and Adventure would have felt a bit fresher.

Regardless, I wouldn’t worry too much about this if you’re just starting out. You’ll learn lots of things from crews that have been sailing the seas for longer :D If you’re having trouble though and you’re from Europe, we have a discord of chill people that play during the weekends. We’ll gladly give you a hand :)

andrew0,

Wow, some of the comments on that article saying Google should have made Android closed-source are mind-boggling. They realize Google never would have reached its current worldwide market share if it had done that, no?

But maybe if they did, we would have had more people working on true linux phones 🤔 I’m a bit torn on this one haha.

andrew0,

The Framework 13-inch model should be plenty, especially if you want to dev on the go. It’s much lighter and smaller, and you can connect it to external monitors if the screen isn’t big enough. Also, you shouldn’t have issues running Linux on either laptop.

Instead of going for the 16 version, I would use the extra 900-1000 euros (the amount I saw I could save between the two almost maxed-out models) to build a dedicated server or mini-cluster for your workloads. Deploy Kubernetes or Proxmox on it, and you’ll also get some extra practice outside work if you want to run stuff for your home lab. That’s assuming you don’t want to game on your laptop, but even then I’d put that money aside for a desktop.

andrew0,

Yeah, you should be able to; you just need a suitable controller board for it. I recommend taking a look at a discussion about this on their forums:

community.frame.work/t/…/21

andrew0,

What a coincidence haha! Happy new year from an Andrew to another! :D

andrew0,

He’s the chosen one, of course it’s supposed to look like that!

andrew0,

We’re slowly getting to a working, stable Linux for phones :o Can’t wait to see the ricing scene on mobile haha

andrew0,

Cheats nowadays don’t even need to run on your machine. You can hook up a second computer to your own via a capture card, analyze the video feed with an AI, and send mouse commands wirelessly from it (mimicking the signal of your USB receiver).

These anti-cheats are nothing more than privacy invasion, and any game maker that believes they have the upper hand over people who want to cheat is very wrong.

Opening up anti-cheat support for Linux would at least force them to get more creative, detecting cheaters from their behaviour instead of analysing everything running in the background.

andrew0,

It’s amazing that Linux gaming is becoming a thing, and sometimes even better than Windows gaming (minus the getting-banned part in some games). I also like that AMD is making big pushes on open-source drivers, plus ROCm, their open-source alternative to CUDA.

This is a great time for Linux users! :)

andrew0,

Same. It sucks that most banks wouldn’t jump on this train :(

andrew0,

What a stupid article. It’s like saying “stop using electric vehicles because you can’t use gas stations”. I don’t understand why he’s so adamant about this. It’s not like Wayland has had the ~20 extra years of development time that X11 had. People keep working on it, and it takes time to polish things.

andrew0,

If I understand correctly, this would just provide the binary for paru without really making it accessible to the user, no?

In the end, I just decided to copy all my dotfiles to /home/user and run an additional script after reboot. For some reason, if you try to arch-chroot and install, it tries to build everything as root (even after providing “-u user” to arch-chroot).
