embedded, to embedded
@embedded@mstdn.social avatar

This week, Kwabena Agyeman spoke with Elecia( @logicalelegance ) and Chris( @stoneymonster ) about optimization, cameras, machine learning, and vision systems.

Join us for another informative episode of Embedded here: https://embedded.fm/episodes/477 .

These are alt titles that were in the running for show title:

  • RISC: One Thousand New Instructions
  • Pixels on Wasp
  • Bee FOMO

Which would you have gone for?

opencv, to ai
@opencv@mastodon.social avatar

New on the OpenCV blog, an OpenCV 5 development progress update from the core team. Recent work includes new samples, ONNX improvements, documentation improvements, HAL and G-API work, and a lot more.

Read the post for the whole story https://opencv.org/blog/opencv-5-progress-update-may-9-2024/

janriemer, to rust

If you're not yet convinced of Rust's strengths, you should give this talk a watch:

RustConf 2023 - Rust in the Wild: A Factory Control System from Scratch:
https://farside.link/https://www.youtube.com/watch?v=TWTDPilQ8q0
(or YT: https://www.youtube.com/watch?v=TWTDPilQ8q0)

Absolutely amazing presentation! I love it! ❤️

albertcardona, to Neuroscience
@albertcardona@mathstodon.xyz avatar

"Noise limits of event cameras" AKA event-based silicon retinas. A talk by Tobi Delbruck in Cambridge, UK, in March 25, 2024.

https://www.youtube.com/watch?v=YY31GaiOkNM

The screenshot below features Patrick Lichtsteiner and his work on mimicking retinal circuits in the design of the dynamic vision sensor (DVS), an event-based camera that emits the log difference of light intensity between times t and t-1 (the event) rather than a full camera frame. This has extraordinary implications for visual processing, data transfer bandwidth, and data storage.
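For intuition, here is a minimal sketch (not from the talk) of how a DVS-style event frame could be emulated in software from two ordinary frames, following the log-difference idea described above; the threshold value is an arbitrary assumption.

    # Illustrative sketch: emulate DVS-style events from two ordinary frames.
    # Threshold is an assumed value, not from the talk.
    import numpy as np

    def dvs_events(frame_prev, frame_curr, threshold=0.2):
        """Per-pixel events: +1 (ON), -1 (OFF), 0 (no event)."""
        eps = 1e-6  # avoid log(0)
        diff = (np.log(frame_curr.astype(np.float32) + eps)
                - np.log(frame_prev.astype(np.float32) + eps))
        events = np.zeros(diff.shape, dtype=np.int8)
        events[diff > threshold] = 1     # brightness rose enough  -> ON event
        events[diff < -threshold] = -1   # brightness fell enough  -> OFF event
        return events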

albertcardona,
@albertcardona@mathstodon.xyz avatar

Having determined that the DVS pixel noise is limited to 2x the shot noise, Tobi's group built a "Scientific DVS" targeting, e.g., very fast imaging of neural activity with low noise. They did this by tweaking the DVS pixel circuit and also binning 4 pixels together for spatial integration.

The result: 10x more sensitive.

Looking forward to seeing applications in neuronal activity imaging, which seems ideally suited for event-based imaging: large fields of view where largely nothing changes, with few, very sparse but fast changing pixels – where neurons are active.
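A rough sketch of the 2x2 binning idea mentioned above: averaging blocks of 4 neighbouring pixels trades spatial resolution for sensitivity. Purely illustrative; the actual sensor does this in the pixel circuit, not in software.

    import numpy as np

    def bin_2x2(frame):
        """Average non-overlapping 2x2 blocks of an (H, W) array (H, W even)."""
        h, w = frame.shape
        return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))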


collabora, to machinelearning
@collabora@floss.social avatar

While many companies have built their #machinelearning analysis framework around #GStreamer, no one had made the effort to contribute upstream, until now. Introducing the @gstreamer Analytics Metadata Framework: https://col.la/gstanalytics

#OpenSource #DataAnalytics #ComputerVision

SimonGarnier, to random

Rvision 0.8.0 is here! Its toolbox now includes camera calibration tools, ORB keypoint detection and matching, pyramid resampling, and much more. It can also work in combination with GStreamer pipelines to read feeds from a wide range of cameras. More info at https://swarm-lab.github.io/Rvision/index.html

gisgeek, to foss
@gisgeek@floss.social avatar

A due to myself. I'm an old-style with more than 30 years of experience with multiple flavors of *nix, networking, , and development. I'm a researcher and data scientist in the and domains, with a background in and . I'm also a Developer, a full-stack dev, and a system administrator, because of my geekness. I'm a type of CS professional that is currently almost extinct, I would say...

itnewsbot, to photonics
@itnewsbot@schleuss.online avatar

Scientists make non-toxic quantum dots for shortwave infrared image sensors - Vials of quantum dots with gradually stepping emission from v... - https://arstechnica.com/?p=1996499

floe, to random
@floe@hci.social avatar

Question for the folks (plz RT):

I'm trying to calibrate a camera with obvious radial lens distortion that happens to view a big screen.

As I'm a lazy bum, I don't want to wave a calibration board around in front of the camera, and instead created the video below.

While this plays on the big screen, I just have the camera take a picture every two seconds, then feed the pile of images into the usual @opencv methods to calculate the calibration parameters.

However...

Video of a simulated ArUco calibration board in various poses, in front of a white background.
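For reference, a minimal sketch of the "pile of images to calibration parameters" step with OpenCV. The post uses an ArUco board rendered on the screen (which would go through the cv2.aruco module); this sketch uses the classic chessboard helpers to keep the flow short. The pattern size and file paths are assumptions, not from the original post.

    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)  # inner corners of the assumed chessboard
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points, img_size = [], [], None
    for path in glob.glob("captures/*.png"):  # the pictures taken every two seconds
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        img_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
            obj_points.append(objp)
            img_points.append(corners)

    # Camera matrix and distortion coefficients (k1, k2, p1, p2, k3)
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, img_size, None, None)
    print("RMS reprojection error:", rms)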

pac, to OSINT
@pac@mastodon.social avatar

At , our mission is to protect French public debate against information manipulation carried out by foreign actors. The team includes geopolitical analysts, OSINT specialists, and data scientists.

If you'd like to come work with me, I've opened two positions on my team:

Please feel free to pass this along.

Lottie, to OpenAI
@Lottie@tooters.org avatar

I can honestly say that has made my life better this year in a small but significant way. 💖 I can ‘share’ any image on my iPhone to and get a very detailed description back within seconds. 📱✨
It might not always be completely accurate, but believe me when I say it is the single biggest thing to help in 38 years of being blind! 🌟

at, to random
@at@sigmoid.social avatar

Two stereo and/or multi-view reconstruction papers on arXiv today. I may or may not read them later : )
DUSt3R: Geometric 3D Vision Made Easy
Shuzhe Wang, Vincent Leroy, Yohann Cabon, Boris Chidlovskii, Jerome Revaud

abs: http://arxiv.org/abs/2312.14132

albertcardona, to random
@albertcardona@mathstodon.xyz avatar

There aren't that many research projects where the ground truth is delicious and you should eat it before it spoils:

"Deep grading of mangoes using Convolutional Neural Network and Computer Vision", Gururaj et al. 2022 https://link.springer.com/article/10.1007/s11042-021-11616-2

opencv, to ai
@opencv@mastodon.social avatar

DEADLINE EXTENDED-- we're pushing our campaign to keep OpenCV free to all into overtime with a 15-day extension to the original campaign.

Back us now: http://igg.me/at/opencv5

It's a very important time to show your support for open source computer vision and AI, so we can keep the future free for everyone and not locked up in private vaults at big companies.

itnewsbot, to ArtificialIntelligence
@itnewsbot@schleuss.online avatar

Chatbot Hype or Harm? Teens Push to Broaden A.I. Literacy - Students at a New Jersey high school want to widen A.I. discussions beyond dueling tropes... - https://www.nytimes.com/2023/12/13/technology/ai-chatbots-schools-students.html (k-12)

Crazypedia, to random
@Crazypedia@pagan.plus avatar

This is so cool, you just have to watch it :sticker_pika_wow:

Watch ""

https://youtu.be/NSS6yAMZF78?si=Dp7ntsLAYhsljmkt

Anoncheg, to machinelearning
@Anoncheg@emacs.ch avatar

The CV task is finished. :) I successfully solved the captcha.
I achieved 88% accuracy, while the company required 90%.
They didn't accept my work and I was not hired. ☄️
As I assumed, the others used neural networks, while I used only OpenCV.
I just don't have a GPU. ⛔

Here are the steps I took (a rough code sketch of steps 4 and 5 follows after the list): 🐱

  1. Found the two hint rectangles in the hint image with cv.findContours
    and collected statistics on their min/max positions.
  2. Fixed the angle of the main image with cv.HoughLinesP.
  3. Found the train positions in the main images by HSV colour range.
  4. Found the subimage keypoints in the first main image with cv.SIFT and cv.FlannBasedMatcher.
  5. Calculated the hint positions by clustering those points with cv.kmeans.
  6. Calculated the train closest (by x and by y coordinates) to the found hint subimages.
    🥁

    😶
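As referenced above, here is a rough, hypothetical sketch of steps 4 and 5: matching a hint subimage against the main image with cv.SIFT and cv.FlannBasedMatcher, then clustering the matched keypoints with cv.kmeans. File names, the ratio threshold, and the cluster count are illustrative, not taken from the original post.

    import cv2 as cv
    import numpy as np

    main_img = cv.imread("main.png", cv.IMREAD_GRAYSCALE)       # assumed file names
    hint_img = cv.imread("hint_crop.png", cv.IMREAD_GRAYSCALE)

    sift = cv.SIFT_create()
    kp_hint, des_hint = sift.detectAndCompute(hint_img, None)
    kp_main, des_main = sift.detectAndCompute(main_img, None)

    # FLANN matcher with a KD-tree index (suits SIFT's float descriptors)
    flann = cv.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    matches = flann.knnMatch(des_hint, des_main, k=2)

    # Lowe's ratio test keeps only distinctive matches
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
            good.append(pair[0])

    # Locations of the matched keypoints in the main image
    pts = np.float32([kp_main[m.trainIdx].pt for m in good])

    # Cluster the points; the largest cluster's centre approximates where the
    # hint subimage appears in the main image (cluster count is a guess here).
    criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv.kmeans(pts, 2, None, criteria, 10, cv.KMEANS_PP_CENTERS)
    best = np.argmax(np.bincount(labels.ravel()))
    print("estimated hint position:", centers[best])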
chrisoffner3d, to iPhone

The iPhone’s computational photography algorithm chose a different pose for each of this lady’s mirror reflections. Spooky. 😅

https://petapixel.com/2023/11/16/one-in-a-million-iphone-photo-shows-two-versions-of-the-same-woman/

ingo, to ai

We’re hiring a Reader (Associate Professor equivalent) in AI at the University of Wolverhampton. More info at https://jobs.wlv.ac.uk/vacancy/reader-in-artificial-intelligence-541125.html

ramikrispin, to ArtificialIntelligence
@ramikrispin@mstdn.social avatar

(1/2) Introduction to the YOLO-NAS Pose 🚀

The YOLO-NAS Pose is a new open-source pose estimation foundation model by Deci AI. It provides the same functionality as the YOLOv8 Pose model with a significant improvement in latency. The model is available through the super-gradients Python package.

#DeepLearning #ComputerVision #OpenSource #YOLO #MachineLearning #python
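Since the post notes the model ships with the super-gradients Python package, here is a minimal, hedged usage sketch under that assumption. The model name string "yolo_nas_pose_l" and the "coco_pose" pretrained-weights id follow the package's naming scheme but should be checked against its documentation.

    # Hedged sketch: load a YOLO-NAS Pose checkpoint via super-gradients and
    # run inference on a single image. Model name and weights id are assumptions.
    from super_gradients.training import models

    model = models.get("yolo_nas_pose_l", pretrained_weights="coco_pose")
    prediction = model.predict("people.jpg")  # any local test image (assumed path)
    prediction.show()                         # draw the detected skeletons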
