Yeah, it’s neat, but I never know what I’d do with this stuff if I made it. Like, I’m not going to go full-on “Stuff Made Here”. Hard to see the practical value.
Jiggling your mouse to keep your status from going to Idle
Nah but ever since I learned how to program industrial robots at my job, I’ve dreamt of a 100% automated coffee making system that has a scientifically perfect cup of coffee waiting for me when I come down the stairs in the morning
I have not built such a device. The programming skill is a big first step toward understanding the potential applications. Assuming the accuracy shown is replicable, the example joinery and mechanics are invaluable. I have tried making low-lash joints with 3D-printed parts, and it was not an easy problem for me to solve from scratch. From my experience, mostly making functional prints and mechanisms, a project with the results claimed here looks like an enormous undertaking with a ton of engineering. It is the kind of thing where I would pick up a lot of design ideas and principles.
Lastly, assuming the final product is durable enough, it changes the time/value balance for repeatable tasks, so a lot of jobs become cheap. The first thing I think of is sawing: with a basic hacksaw blade you can cut through almost anything just by repetitive motion, including cuts that would normally be impractical due to the effort and time involved. This kind of machine really benefits from being paired with other machines. It could even be used with a 3D printer to swap build plates or remove prints.
Manipulating objects or serving a 3D printer sounds like a perfect task for a robotic arm, but cutting or machining, not really. It’s heavy work and would be more suitable for a Cartesian XYZ machine. I have no real experience with robots in production, but I’ve seen super expensive robots serving other machines and failing quite often.
That arm could wipe the nozzle just before a print starts, open and close the cabinet door when needed, move a camera in space for some crazy timelapse, maybe 3D-scan the model on the build plate, do a photo shoot of every model you printed, or if you really go mad it could even switch extruders for a multi-material setup. It would be awesome to have that at home; shame I don’t have the knowledge, time, or money to accomplish that 🥺 maybe in the future
That is spooky. Of course I had to go look it up. Contrary to that first part of the Wikipedia page, this is no longer a conspiracy theory. It’s becoming reality.
I would ignore the people who say you should deploy a model from someone else, as that will teach you next to nothing about how this stuff works.
I would start with an older model and framework (e.g. scikit-learn) and go through all the preprocessing, prediction, and evaluation steps using a model that’s fairly simple to understand. Since you already know about linear regression, start with some of its linear models.
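To make that concrete, here’s a minimal scikit-learn sketch of the whole fit/predict/evaluate loop. The data is synthetic (made up for illustration), so the true coefficients are known and you can see the model recover them:

```python
# Minimal scikit-learn loop: generate data, fit, predict, evaluate.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))                    # 200 samples, 3 features
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)  # known true weights + noise

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)

print(model.coef_)   # should be close to [2, -1, 0.5]
print(mean_squared_error(y_test, model.predict(X_test)))
```

Once this loop feels boring, you’ve understood the workflow that every fancier model reuses.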
Then, and only then, would I worry about neural networks and deep learning, since the main differences are a non-linear activation function and a much larger, more complicated set of weights (what linear regression calls model parameters).
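The point about the non-linear activation can be shown in a few lines of numpy: without it, two stacked layers collapse back into a single linear model, so you’d gain nothing over linear regression. All sizes here are toy values:

```python
# Why the activation matters: two stacked linear layers are exactly
# one linear map, while a ReLU in between breaks that collapse.
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 3))   # layer 1 weights (toy sizes)
W2 = rng.normal(size=(1, 4))   # layer 2 weights

def linear_stack(x):
    return W2 @ (W1 @ x)                   # equivalent to one matrix (W2 @ W1)

def mlp(x):
    return W2 @ np.maximum(W1 @ x, 0.0)    # ReLU in between

x = rng.normal(size=3)
# Without the ReLU, the two layers are exactly one linear model:
assert np.allclose(linear_stack(x), (W2 @ W1) @ x)
```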
You’re right. I read past the “I want to learn ML” and went straight to “do something useful with the data”.
If the goal is to understand how modern LLMs work, it’s also good to read up on RNNs and LSTMs. For this, 3Blue1Brown does an amazing job, and even posted an in-depth video about transformers. I’d watch that next, followed by implementing a simple transformer in PyTorch (perhaps using the existing blocks).
You could argue that it’s important to design everything from scratch first, but it’s easier to first go high level, see how the network behaves, and then attempt to implement it yourself based on the paper. It’s up to OP how comfortable they are with the topic though 😁
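Before (or alongside) PyTorch’s built-in blocks, it can help to write the core operation, scaled dot-product attention, in plain numpy once. This is only the central op of a transformer layer, not a full block (no multi-head split, projections, or masking), and the shapes are toy-sized:

```python
# Scaled dot-product attention, the core op inside a transformer block.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (seq, seq) similarity matrix
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted average of the values

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8
Q = rng.normal(size=(seq_len, d_model))
K = rng.normal(size=(seq_len, d_model))
V = rng.normal(size=(seq_len, d_model))
out = attention(Q, K, V)
print(out.shape)  # one context vector per input position
```

Each output row is a convex combination of the value vectors, which is the intuition 3Blue1Brown’s video builds visually.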
Depending on how much compute you have available, you can look into fine-tuning models from Hugging Face (e.g. Llama 3, or a smaller Phi model). Look into LoRA, and try to learn how the model you choose calculates its loss.
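The core idea of LoRA fits in a short numpy sketch: keep the pretrained weight frozen and learn only a low-rank additive update. All sizes here are made up, and real implementations (e.g. the peft library) wrap this around the model’s attention layers rather than one bare matrix:

```python
# LoRA idea: freeze W, train only the low-rank pair B @ A.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 512, 512, 8           # r << d is the "low rank"
alpha = 16                             # LoRA scaling hyperparameter

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable
B = np.zeros((d_out, r))               # trainable, zero-init so the update starts at 0

def forward(x):
    return W @ x + (alpha / r) * (B @ (A @ x))

# Trainable parameter count: full fine-tune vs LoRA
full = W.size            # 512 * 512 = 262144
lora = A.size + B.size   # 8*512 + 512*8 = 8192, ~32x fewer
print(full, lora)
```

That parameter-count gap is why LoRA makes fine-tuning feasible on modest hardware.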
There are various ways to train; one common approach involves masking the input by replacing random input tokens with a mask token. I won’t go into too much detail here because it’s a lot to explain, and I suggest you read an article on it (link1 or link2)
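A minimal sketch of that masking step, BERT-style. The token ids, the mask id, and the 15% rate are illustrative; real tokenizers define their own special ids, and BERT also sometimes keeps or randomizes the selected tokens instead of always masking:

```python
# Masked-token training input: hide ~15% of tokens, keep the originals as targets.
import random

MASK_ID = 0  # made-up mask token id for illustration

def mask_tokens(tokens, p=0.15, seed=42):
    rng = random.Random(seed)
    masked, labels = [], []
    for t in tokens:
        if rng.random() < p:
            masked.append(MASK_ID)   # the model must predict the original here
            labels.append(t)         # training target at this position
        else:
            masked.append(t)
            labels.append(-100)      # conventional "ignore this position" label
    return masked, labels

tokens = [101, 2054, 2003, 1996, 3007, 1997, 2605, 102]  # made-up token ids
masked, labels = mask_tokens(tokens)
```

The loss is then computed only at the masked positions, which is what the -100 ignore label is for.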
That’s a great starting point! Your scraped data from Threads and Instagram can be a valuable resource for exploring AI/ML. Here’s a general roadmap to get you started:
Understand Your Data: Before diving into AI/ML models, it’s crucial to understand your data. Analyze the content you scraped from Threads and Instagram. What format is it in (text, images, videos)? What kind of information does it contain (captions, comments, user data)?
Choose an AI/ML Approach: Based on your data and goals, you can explore different AI/ML techniques. Here are some options to consider:
- Text Analysis: If your data is text-heavy, you can use natural language processing (NLP) to analyze sentiment, topics, or emerging trends.
- Image Recognition: If you have a lot of images, you can use computer vision to identify objects and scenes, or classify images based on their content.
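For the text-analysis route, a toy sentiment pipeline shows the whole vectorize/fit/predict loop in a few lines. The captions and labels below are invented; your real scraped data would replace them:

```python
# Toy sentiment pipeline: bag-of-words features + logistic regression.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

captions = [
    "love this place, amazing food",
    "great photo, love it",
    "terrible service, never again",
    "awful experience, hate it",
]
labels = ["pos", "pos", "neg", "neg"]

clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(captions, labels)
print(clf.predict(["love the amazing view"]))
```

With real data you’d want far more examples and a train/test split, but the pipeline shape stays the same.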
Start Simple: Begin with well-established algorithms like linear regression or decision trees. These can provide valuable insights without requiring deep learning expertise.
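A decision tree is a nice first model precisely because you can print the rules it learned. The features (post length, hashtag count) and the tiny dataset here are invented just to show the idea:

```python
# Decision tree sketch: fit, inspect the learned rules, predict.
from sklearn.tree import DecisionTreeClassifier, export_text

# Made-up features: [post_length, num_hashtags] -> did the post get engagement?
X = [[50, 0], [40, 1], [300, 8], [280, 10], [60, 1], [320, 9]]
y = [0, 0, 1, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["post_length", "num_hashtags"]))
print(tree.predict([[290, 7]]))
```

The printed rules are human-readable if/else splits, which makes it easy to sanity-check what the model actually picked up on.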
Utilize Online Resources: There are plenty of online tutorials and courses that can introduce you to AI/ML concepts. Platforms like Google Colab offer free computing resources to experiment with code. Remember, this is an ongoing learning journey. Start with small steps, explore different resources, and don’t be afraid to experiment!
It had some features that nnn did not, like displaying operation progress or favourite directories and such. In the end I went back to nnn, both because I read that superfile had internet access and because I already use a graphical file manager for the things that nnn and many terminal file managers can’t do even with extensive plugins.
Uhm, both displaying copy/move progress and having shortcuts for “favourite” dirs are quite possible with nnn, although for the latter I mostly use the -S argument for a persistent session.
The only drawback of nnn in my book is the kind of weird/cumbersome way you configure it with environment variables. And the non-existent image preview under Wayland.
Yeah, having to customize with env variables is not great, and adding bookmarks is much easier in superfile. Anyway, I suppose one does not set bookmarks too often. Plus nnn was so fast I just tapped the keys to get to the directory I needed easily. Once I learned most shortcuts I was flying through operations.
It wanted to download a zip file. Apparently it was a theme. But I’m not letting a local file manager talk to the internet randomly. If I want to update it, I’ll update it myself. Or at least provide an option to enable it on first run.
raw.githubusercontent.com