Access for All: Two friends helping change opportunities for blind people with an open-source screen reader for all. Now on Microsoft Unlocked: https://unlocked.microsoft.com/nvda/
Does anyone here have experience using Microsoft Access with any #ScreenReader? I'd love to know whether it just isn't accessible or I just haven't figured out what I'm doing yet. There's nothing about it in the help topics, the Freedom Scientific YouTube channel, or their training webinar page. Please boost. #Jaws #JFW #NVDASr #accessibility
This Global Accessibility Awareness Day, we have something special to share! @github made a movie about our founders Mick & Jamie & the story of not only NVDA, but also OSARA. Two life-changing open-source projects. Both actively providing access & employment to blind people around the world.
NVDA users who also use some form of Eloquence, whether that be CF or IBMTTS or whatever: add this to your voice dictionary.
Pattern: (.)[\ufe00-\ufe0f]
Replacement: \1
Comment: whatever (I call mine "strip variation selectors")
Type: regular expression
This will suppress Unicode variation selectors (U+FE00-U+FE0F), which are used as combining characters with many emoji. These selectors are what cause Eloquence to add a question-mark intonation when it encounters some emoji, especially ones involving gender or skin-tone selection. With this entry, for instance, 5 red hearts, which often include a variation selector, are pronounced as "5 red heart" rather than being read out with question marks in between.
❤️❤️❤️❤️❤️
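For anyone curious what the dictionary entry actually does, here's a minimal Python sketch of the same substitution (the function name is mine, not part of NVDA):

```python
import re

# One character followed by a Unicode variation selector (U+FE00-U+FE0F),
# the same pattern as the NVDA dictionary entry above.
VS_PATTERN = re.compile("(.)[\ufe00-\ufe0f]")

def strip_variation_selectors(text: str) -> str:
    # Keep the base character (group 1), drop the selector.
    return VS_PATTERN.sub(r"\1", text)

hearts = "\u2764\ufe0f" * 5  # five red hearts: U+2764 + U+FE0F each
print(strip_variation_selectors(hearts))  # five bare U+2764 characters
```

The synthesizer then sees five identical characters in a row, so the usual symbol-repetition handling applies instead of five "question mark" pauses.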
Perhaps I should open an issue against NVDA about how this affects the symbol-repetition algorithm, and maybe against IBMTTS to strip the selectors on input so such an entry isn't needed. #NVDASR
This week's In-Process is out! Featuring the latest on NVDA 2024.1, add-on updates, the Microsoft FOSS Fund, Philanthropy Australia awards, an Interview with our GTO Gerald, and more! All available now at: https://www.nvaccess.org/post/in-process-22nd-april-2024/
To what degree does NVDA actually support TUIs? When I look at terminal user interfaces for tools like Joplin, Vim, practically anything that requires using arrow keys or Tab within a terminal, the experience is almost always horrible. Are there things TUI developers can do to better accommodate NVDA? @NVAccess @tspivey
Do terminal-first tools like TDSR in a WSL2 shell improve this at all? #nvdasr #screenreaders #accessibility
I just updated my screen reader, #NVDA, to version 2024.1.
A few notes of interest:
All add-ons I consider essential work. Remote isn't updated yet but TeleNVDA is.
Unspoken, which I maintain, hasn't been updated but can be force-enabled and works. Sometime this week I'll issue a new release.
Of the non-essential add-ons that I'd like to have but haven't tried forcing, I'd highlight Instant Translate and Calibre.
I've also made a donation. I encourage others to contribute to NVDA with code, translations, time, or money, if their situation permits. NVDA doesn't maintain itself. It's not only my favourite screen reader, but it's essential to keeping our needs in focus, maintaining our independence, and avoiding a monopoly.
NV Access is very pleased to share the release of the NVDA 2024.1 Release Candidate. Unless any major issues are identified, this will be identical to the final release. We encourage all users to test the Release Candidate. Many updates including on-demand speech mode, native selection in Firefox, bulk actions in the add-on store and much more! Read the full details and download at: https://www.nvaccess.org/post/nvda-2024-1rc1/
In-Process is out - live from Melbourne! Featuring news on NVDA 2024.1 Beta 10, the WebAIM screen reader user survey results, GitHub's accessibility survey and the results of our own Mastodon survey! Read it now at: https://www.nvaccess.org/post/in-process-29th-february-2024/
We had a question from someone asking what Mastodon client for Windows people use that is accessible with NVDA. I thought here might be the perfect place to post that question! What do YOU use, or how do you access Mastodon on PC? Please let us know!
Thanks for your patience with the beta glitch last week. We're pleased to advise we've got rid of those nasty bugs. NVDA 2024.1 Beta 10 is now available, and we promise the bugs in this version are much friendlier! Please do head to https://www.nvaccess.org/post/nvda-2024-1beta10/ for all the details & to download.
Just in case anyone is interested, and for the archives/searches: I recently asked if anyone had managed to use #LLM models to access #UEFI interfaces, or other interfaces without #A11Y, as a #blind user. The idea was to use a capture card to bring in the video from the inaccessible machine, send pictures from that video stream to the LLM, and get descriptions or ask questions. This is how I did it. It's not pretty, but it's another helpful tool for the toolbox.

You'll need a video capture card (HDMI or DisplayPort to USB), the OpenAI #NVDA add-on, and a method of displaying the video from the capture card on screen. I tried four HDMI capture cards and all of them worked; I think the point is that the capture device should show up to Windows as a webcam. I haven't found a cheap capture device which didn't. The only reason I had to try four was that I was using the audio input from the HDMI for another project, and it's surprising how many devices won't receive the sound even in simple stereo. Anyhow, just searching for HDMI capture on Google/Amazon will probably turn up something usable.

The OpenAI NVDA add-on is at https://github.com/aaclause/nvda-OpenAI/
The method I used to display the received video is at https://superuser.com/questions/1744688/how-can-i-view-the-video-coming-in-from-a-capture-card-on-windows-in-full-screen

The steps are basically putting the puzzle pieces together. Set up the add-on following its instructions, copy and paste the HTML from the superuser link into a new HTML file, and open that file in the browser. Opening the file from File Explorer works fine, and Firefox, at least, will ask for camera permission, so make sure to allow it. Now move the NVDA navigator cursor/focus to the video. Here, the object is called "document"; the point is to avoid sending the entire screen, or even the whole Firefox window.

Having pressed the add-on command to capture the object, you will be placed in the prompt field and can ask any questions you like, or rely on the default "describe this image" prompt. Generally, I ask for a description first and then ask follow-up questions or adjust the image as best I can.

Just a few tips. Maximizing the window and pressing Firefox's "full screen" button on the video appears to be helpful. The GPT-4 vision model does confabulate/hallucinate, and what it makes up is plausible. This is just another tool, not something to rely on exclusively; it is in addition to, rather than instead of, OCR, one's own knowledge, etc. The image is sometimes cut off; I'm not sure why, but I suspect at least some of it comes from the video being displayed on screen in the browser.

I would welcome better ways to do this. As I said, it's not pretty, just what I could come up with in a few minutes of searching and some trial and error. Having said that, it is a small step forward. Note that, as one would expect, the method also works to bring in pictures from a standard webcam. #nvdasr #ScreenReader
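The add-on handles the API call itself, but for the curious, the request it sends is roughly of this shape. This is a hedged sketch based on OpenAI's public chat-completions image-input format; the model name, prompt, and helper function are my assumptions, not taken from the add-on's source:

```python
import base64

def vision_request(image_bytes: bytes, prompt: str = "Describe this image") -> dict:
    """Build a chat-completions payload with an inline base64 image.

    Hypothetical sketch: a captured frame is base64-encoded into a data
    URL and sent alongside the text prompt in a single user message.
    """
    data_url = "data:image/png;base64," + base64.b64encode(image_bytes).decode()
    return {
        "model": "gpt-4-vision-preview",  # assumed model name
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
    }
```

Because the whole image travels as one base64 blob inside the request, capturing only the video object (rather than the full screen) keeps the payload small and the description focused.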