New on the OpenCV blog, an OpenCV 5 development progress update from the core team. Recent work includes new samples, ONNX improvements, documentation improvements, HAL and G-API work, and a lot more.
Generative AI is not just teaching cyber bad guys new tricks — it’s also making it easier for anyone to become a bad guy, according to Cybersecurity and Infrastructure Security Agency (CISA) chief Jen Easterly.
“I look at AI: how fast it’s moving, how unpredictable it is, how powerful it is,” Easterly told @AxiosNews. “I think it’ll make people who are less sophisticated actually better at doing some of the bad things that they want to do.” Here’s more from the interview.
"Consumer AI is just the new search" anecdote: [1/3]
Casual, non-techy coworkers were talking yesterday about using Excel reports to analyze data, and it turns out two of them use #ChatGPT to figure out how to do things in Excel.
So, before this #AI stuff, if you searched "how do I do X in Excel" in Google, you'd get a bunch of hits, then have to wade through the results to see which link was actually what you were looking for, then test whether that solution actually worked.
"Consumer AI is just the new search" anecdote: [2/3]
Now, you do the same thing, but in #ChatGPT, even just upload a photo of what you are trying to do, and instead of a bunch of dubious links, you get an answer that probably works. That probability is just as good as the probability after combing through search results yourself, but without the combing.
"Consumer AI is just the new search" anecdote: [3/3]
There are over 1 billion websites with over 30 billion web pages out there on the internet, and regular search absolutely sucks now. It's no wonder normies see #ChatGPT as magic when it can take those 30 billion+ pages and give you one answer that's most likely what you are looking for.
#Apple didn’t read the room with this ad about creative tools literally being crushed to make its new #iPad. Creators were already up in arms over generative #AI and nervous about rumors that the company was planning to do something with it.
Many people canceled their OpenAI subscriptions, or it is tough to monetize stuff created with generative AI, I guess, so Sam Altman has come up with a new plan to use all those GPUs. They are now going after OF models. WTF OpenAI? They are going to allow deepfakes? This company is beyond evil 👿
So, how can we get a proper answer? Ten years ago, when I wrote “The Milwaukee Soup App”, I used Kimono (which is long dead) to scrape the soup of the day. You could also write a fiddly script to scrape the value by hand. It turns out there is another option, though: ScrapeGraphAI. ScrapeGraphAI is a web scraping Python library that uses LLMs and direct graph logic to create scraping pipelines for websites, documents, and XML files. You just say which information you want to extract, and the library does it for you.
Let’s take a look at an example. The project has an official demo where you need to provide an OpenAI API key, select a model, provide a link to scrape, and write a prompt.
As you can see, it reliably gives you the flavor of the day (in a nice JSON object). It goes even further, though: if you point it at the monthly calendar, you can ask for the flavor of the day and the soup of the day for the remainder of the month, and it handles that as well.
I am running Python 3.12 on my Mac, but running pip install scrapegraphai to install the dependencies throws an error. The project lists Python 3.8+ as a prerequisite, so I downloaded 3.9 and installed the library into a new virtual environment.
Let’s see what the code looks like.
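Here is a minimal sketch of what that code can look like, assuming ScrapeGraphAI's SmartScraperGraph class with an OpenAI-backed configuration. The exact config keys (particularly the embeddings section) vary between library versions, and the Culver's location URL and prompt below are illustrative placeholders rather than the exact values from my script.

```python
import json

from scrapegraphai.graphs import SmartScraperGraph

OPENAI_API_KEY = "sk-..."  # your OpenAI API key

# Configure the main (chat) model and the embedding model.
# Assumption: this follows the 0.x-style config with separate "llm" and
# "embeddings" sections; in some versions the embeddings section can be
# omitted and defaults to the same provider as the LLM.
graph_config = {
    "llm": {
        "api_key": OPENAI_API_KEY,
        "model": "gpt-3.5-turbo",
    },
    "embeddings": {
        "api_key": OPENAI_API_KEY,
        "model": "text-embedding-ada-002",
    },
}

# Illustrative placeholder URL: point the scraper at a Culver's location
# page and ask for the flavor of the day.
scraper = SmartScraperGraph(
    prompt="What is the flavor of the day?",
    source="https://www.culvers.com/restaurants/your-location",
    config=graph_config,
)

result = scraper.run()
print(json.dumps(result, indent=2))
```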
You will notice that, just like in yesterday's "How to build a RAG system" post, we are using both a main model and an embedding model.
At this point, if you want to harvest flavors of the day for each location, you can do so pretty simply. You just need to loop through each of Culver’s location websites.
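As a rough sketch of that loop (the location URLs here are hypothetical placeholders, and graph_config is the same configuration from the snippet above):

```python
from scrapegraphai.graphs import SmartScraperGraph

# Hypothetical location URLs -- substitute the real Culver's location pages.
location_urls = [
    "https://www.culvers.com/restaurants/location-one",
    "https://www.culvers.com/restaurants/location-two",
]

flavors = {}
for url in location_urls:
    # One scraper run per location page, reusing the graph_config defined earlier.
    scraper = SmartScraperGraph(
        prompt="What is the flavor of the day?",
        source=url,
        config=graph_config,
    )
    flavors[url] = scraper.run()

print(flavors)
```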
Have a question, comment, etc.? Please feel free to drop a comment below.
@bagder
Great action 👍🏻. My hope is that we get more specialized pages/blogs, instead of those central places that sooner or later get way too much power, especially if that power is based on contributions from the community. For this reason, I decided to revamp my blog (https://linux-audit.com/), specialize, and still allow #AI to crawl it. After all, if it continues to exist, I'd rather it use knowledge of higher quality.