This week, Kwabena Agyeman spoke with Elecia (@logicalelegance) and Chris (@stoneymonster) about optimization, cameras, machine learning, and vision systems.
New on the OpenCV blog, an OpenCV 5 development progress update from the core team. Recent work includes new samples, ONNX improvements, documentation improvements, HAL and G-API work, and a lot more.
The screenshot below features Patrick Lichtsteiner and his work on mimicking retinal circuits in the design of the dynamic vision sensor (DVS), an event-based camera where the log difference of light intensity at time t and t-1 is emitted (the event), rather than a typical camera frame. This has extraordinary implications for visual processing, data transfer bandwidth and data storage.
Having determined that the DVS pixel noise is limited to 2x the shot noise, Tobi's group built a "Scientific DVS" targeting, e.g., very fast imaging of neural activity with low noise. They've done it by tweaking the DVS pixel circuit and also binning 4 pixels together for spatial integration.
The result: 10x more sensitive.
Looking forward to seeing applications in neuronal activity imaging, which seems ideally suited for event-based imaging: large fields of view where largely nothing changes, with few, very sparse but fast changing pixels – where neurons are active.
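A minimal sketch of the event-generation idea described above, assuming a simple contrast-threshold model on the log intensity; the threshold value and the 2x2 binning helper are illustrative, not the actual DVS pixel circuit:

```python
import numpy as np

def dvs_events(prev, curr, threshold=0.2, eps=1e-6):
    """Emit DVS-style events: +1/-1 where the log-intensity change
    between consecutive frames exceeds a contrast threshold."""
    diff = np.log(curr + eps) - np.log(prev + eps)
    events = np.zeros_like(diff, dtype=np.int8)
    events[diff > threshold] = 1     # ON event: brightness increased
    events[diff < -threshold] = -1   # OFF event: brightness decreased
    return events

def bin2x2(frame):
    """Sum 2x2 pixel blocks, loosely mimicking the Scientific DVS
    spatial binning for extra sensitivity."""
    h, w = frame.shape
    return frame[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).sum(axis=(1, 3))
```

Pixels where nothing changes emit no event at all, which is exactly why sparse, fast-changing scenes like neuronal activity are such a good fit.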
Rvision 0.8.0 is here! Its toolbox now includes camera calibration tools, ORB keypoint detection and matching, pyramid resampling, and much more. It can also work in combination with GStreamer pipelines to read feeds from a wide range of cameras. More info at https://swarm-lab.github.io/Rvision/index.html
A proper #introduction of myself. I'm an old-style #geek with more than 30 years of experience with multiple flavors of *nix, networking, #FOSS, and development. I'm a researcher and data scientist in the #geospatial and #remotesensing domains, with a background in #computervision and #HPC. I'm also a #Debian Developer, a full-stack dev, and a system administrator, because of my geekness. I'm a type of CS professional that is currently almost extinct, I would say...
I'm trying to calibrate a camera with obvious radial lens distortion that happens to view a big screen.
As I'm a lazy bum, I don't want to wave a calibration board around in front of the camera, and instead created the video below.
While this plays on the big screen, I just have the camera take a picture every two seconds, then feed the pile of images to the usual @opencv methods to calculate the calibration parameters.
At #VIGINUM, our mission is to protect French public debate against information manipulation by foreign actors. The team includes geopolitical analysts, #osint specialists, and data scientists.
If you'd like to come work with me, I've opened two positions on my team:
I can honestly say that #OpenAI has made my life better this year in a small, but significant way. 💖 I can ‘share’ any image on my iPhone to #BeMyEyes and almost instantly get a very detailed description back in seconds. 📱✨
It might not always be completely accurate, but believe me when I say it is the single biggest thing to help in 38 years of being blind! 🌟 #Accessibility #AI #AISoftheBlind #Blind #ComputerVision #Disability #GPT4 #Innovation #MachineLearning
Two stereo and/or multi-view reconstruction papers on arXiv today. I may or may not read them later : )
DUSt3R: Geometric 3D Vision Made Easy
Shuzhe Wang, Vincent Leroy, Yohann Cabon, Boris Chidlovskii, Jerome Revaud
It's a very important time to show your support for open source computer vision and AI, so we can keep the future free for everyone and not locked up in private vaults at big companies.
CV task is finished. :) I successfully solved the captcha.
I achieved 88% accuracy, while the company required 90%.
They didn't accept my work and I was not hired. ☄️
As I assumed, the others were using neural networks, while I used OpenCV only.
I just don't have a GPU. ⛔
Here are the steps of how I did it: 🐱
Found the two rectangles in the hint image with the help of cv.findContours
and collected statistics about their max/min positions.
Fixed the angle of the main image with the help of cv.HoughLinesP.
Found the train position on the main images by HSV colour range.
Found the subimage points on the first main image with the help of cv.SIFT and cv.FlannBasedMatcher.
Calculated the hint position by clustering those points with cv.kmeans automatic clustering.
The YOLO-NAS Pose is a new open-source pose estimation foundation model by Deci AI. It provides the same functionality as the YOLOv8 pose model with a significant improvement in latency. The model is available with the super-gradients Python package.