@sachac@emacs.ch
@sachac@emacs.ch avatar

sachac

@sachac@emacs.ch

Interests include: #Emacs, #OrgMode, #elisp, #nodejs, #python, #sketchnotes, #parenting, #cooking, #gardening, #knitting, #sewing, #lego, #captioning, #plover #steno, and #stoic philosophy. Originally from Manila, now in Toronto. Married to a Vim guy (go figure) and raising a 7-year-old (editor preference unknown), along with two very loud cats.

Blog: https://sachachua.com (mostly Emacs News these days), sketches: https://sketches.sachachua.com. I also maintain planet.emacslife.com and subed.el


sachac, to emacs

It was super easy to use pavucontrol to switch the output of my BigBlueButton test web conference room into a PulseAudio null sink that I had previously created, and then use the monitor of that sink as the input to my live speech transcription setup so that I could get it into Emacs. I don't have to mess around with a mic + system audio setup for that one; I just need to join the web conference in a separate browser (maybe even just a separate tab) and then reroute the audio. My next step is to see if I can get Etherpad's new appendText API call working. Then I can hook into that and dump a live transcript into Etherpads. I also want to experiment with semi-automatically identifying speakers and correcting misrecognized words.
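For the curious, the rerouting above boils down to a couple of `pactl` invocations. Here's a minimal sketch (the sink name `transcribe` and the sink-input ID are hypothetical; check `pactl list short sink-inputs` for the real browser stream ID):

```python
# Sketch of the PulseAudio rerouting described above. Names are
# illustrative, not from the original post.
def make_null_sink_cmd(name="transcribe"):
    """Command to create a null sink whose monitor we can record from."""
    return ["pactl", "load-module", "module-null-sink",
            f"sink_name={name}",
            f"sink_properties=device.description={name}"]

def move_sink_input_cmd(sink_input_id, sink_name="transcribe"):
    """Command to move a running stream (e.g. the browser tab playing
    the BigBlueButton conference) into that null sink."""
    return ["pactl", "move-sink-input", str(sink_input_id), sink_name]

# The transcription process then records from "<sink_name>.monitor",
# e.g.: parecord --device=transcribe.monitor out.wav
```

Running these via `subprocess.run` (or typing them in a shell) does the same thing pavucontrol does through its GUI.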

sachac, to emacs

Getting live speech into Emacs with Deepgram's streaming API
https://sachachua.com/blog/2023/12/live-speech-with-deepgram/ ( @jotaemei @hajovonta Here's a little step in the journey towards automatic live captions and other fun things)

sachac, to random

One step closer to figuring out live autocaptions that might be semi-tweakable! I adapted some code from https://github.com/deepgram/streaming-test-suite/ to let me also automatically save the JSON and text for further processing. I think I'll be able to use start-process in Emacs to get that to listen to my audio and put the text in a buffer, so we can get live notes during streaming or braindumping. If I can use ALSA to pipe audio into the process, I might be able to rig it up to send lines to an IRC channel using ERC or overwrite a text overlay that OBS uses, so a future EmacsConf might even have auto captions for live talks. Bonus points if I can someday figure out how to correct misrecognized words on the fly, either by pattern-matching on common errors or having a quick way I can replace a word or two...

sachac, to emacs

Today I used Lisp to parse Deepgram's recognition JSON output with utterances, punctuation, and smart format turned on and the Large model selected. I turned the words array into a VTT subtitle file with speaker identification (handy for EmacsConf Q&A) and captions limited to roughly 45 characters with punctuation preferred for splitting. It's way faster than waiting for a CPU-only computer to run Whisper Large on the files. Looking forward to experimenting with this for my personal braindumping too.
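The original code is Emacs Lisp, but the grouping idea is easy to sketch in Python: walk Deepgram's `words` array (entries with `punctuated_word`, `start`, `end`, and `speaker` when diarization is on), start a new cue on a speaker change or when a cue would exceed ~45 characters, and prefer to break right after sentence punctuation. This is an illustration of the approach, not sachac's actual code:

```python
def format_time(seconds):
    """Seconds -> WebVTT timestamp (HH:MM:SS.mmm)."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3600000)
    m, ms = divmod(ms, 60000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

def words_to_vtt(words, max_chars=45):
    """Group Deepgram word entries into ~max_chars cues with speaker
    labels, preferring to split after sentence-ending punctuation."""
    cues, current = [], []

    def flush():
        if current:
            cues.append((current[0]["start"], current[-1]["end"],
                         current[0].get("speaker"),
                         " ".join(w["punctuated_word"] for w in current)))
            current.clear()

    for w in words:
        # Length of the cue if we appended this word (spaces included).
        text_len = (sum(len(x["punctuated_word"]) + 1 for x in current)
                    + len(w["punctuated_word"]))
        if current and (text_len > max_chars
                        or w.get("speaker") != current[0].get("speaker")):
            flush()
        current.append(w)
        if w["punctuated_word"][-1] in ".?!":  # break after punctuation
            flush()
    flush()

    out = ["WEBVTT", ""]
    for start, end, speaker, text in cues:
        out.append(f"{format_time(start)} --> {format_time(end)}")
        prefix = f"[Speaker {speaker}]: " if speaker is not None else ""
        out.append(prefix + text)
        out.append("")
    return "\n".join(out)
```

The speaker prefix format here is made up; any convention the downstream player or editor expects would work.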

sachac, to random

I received a Google Open Source Peer Bonus award, which was a pleasant surprise. :) Thinking of ways to reinvest it into Emacs and the community to see what a little money earmarked for that could do. People have already donated enough to cover hosting costs, so that's all sorted out. Might see how USD ~250 could be used to help me make more blog posts and videos. Could start with experiments with speech recognition or NLP/AI for outlining/summarizing/cleaning up my audio braindumps. Paying for cloud usage will let me do tiny experiments without upgrading my X230T for now. We'll see!

sachac, to random

OrgMeetup hosted by @yantar92 happening now https://bbb.emacsverse.org/b/iho-h7r-qg8-led <2023-12-13 Wed 19:00-21:00 @+03,Europe/Istanbul>

From https://emacs.ch/@yantar92/111549860162341841 - agenda:

  • Give advice to new users
  • Showcase Org configs or workflows
  • Demo/discuss interesting packages
  • Troubleshoot each other's issues
  • Discuss "Org mode" section of Emacs news (https://sachachua.com/blog/)
  • Discuss anything else Org-related

Everyone is free to join the discussion/chat or lurk around silently,
listening. No recording by default.

sachac, to random

Four things I'm focusing on learning more about this month:

  • Enjoying winter with the kiddo: mostly a matter of going out there and doing it, filling in any gaps along the way. Most of her friends have shifted inside, so it's up to me to figure out how to make the outside fun. Might be nice to keep track of time outside and successful reasons to get out: playgrounds, skating, treats, and the occasional playdate. Time analysis can also help me keep the big picture in mind so I don't stress about how long it sometimes takes to get out of the house. Gotta keep things pleasant!
  • Braindumping: I've been recording more stuff using a lapel mic and my phone to take advantage of solo time (usually waiting or doing chores). Looking forward to experimenting with speech recognition and LLM options, making up my own command language, and tinkering with workflows to turn braindumps into posts and maybe even videos.
  • Helping the kiddo develop Grade 2 reading and writing skills: Could be fun doing things that don't scale, like modeling how to make connections by reading together and adding stuff to commonplace journals / Zettelkasten; helping her with writing by mindmapping and using follow-up questions; and modeling taking notes and working with mindmaps to help with summaries.
  • Being together: This is the time to get even better at appreciating who A+ is as a person and this opportunity to be with her and W-. It's also a good time to get better at creative play and at helping out around the house. Journal entries will help me see progress, I think.

There's time for the important stuff. Other things will fit around these.

sachac, to random

All right, videos posted, pads and IRC logs copied, update sent to emacsconf-discuss, thank-you notes sent! Now I have a little time to make some progress on the non-EmacsConf parts of my todo list. I'll eventually circle back and do captions for the live/late talks and maybe indices for the Q&A. Plenty of things I need to catch up on, though! :)

sachac, to emacs

Today I practised setting conditional breakpoints with edebug using "x". Getting a little better at it day by day!

sachac, to random

Okay, I think the YouTube channel should have all the talks and current subtitles. https://www.youtube.com/@EmacsConf Next time I can fiddle with things, I'll work on getting the PeerTube channel sorted out, and then the Q&A videos too.

sachac, to random

Mwahahaha, I've figured out how to use url-http-oauth to talk to the YouTube Data API from Emacs Lisp so I can update #EmacsConf video titles and descriptions. I can even use plz by passing the bearer token in as a header. Tomorrow I'll figure out how to copy all the video IDs into my Org properties, upload any missing videos, and set all the captions we have so far. At some point, I'll add the code to our repo and write a blog post. Whee!
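The "bearer token as a header" trick is the whole secret: once OAuth gives you an access token, a `videos.update` call is just an authenticated PUT. A sketch of the request shape in Python (token acquisition is handled elsewhere, e.g. by url-http-oauth in the Emacs version; the category ID is an assumption, but the API does require `snippet.categoryId` when updating the snippet):

```python
import json

def build_update_request(video_id, title, description, token):
    """Build a YouTube Data API v3 videos.update request: a PUT with
    the OAuth bearer token passed as a plain Authorization header.
    Sketch only; send it with any HTTP client."""
    url = "https://www.googleapis.com/youtube/v3/videos?part=snippet"
    headers = {"Authorization": f"Bearer {token}",
               "Content-Type": "application/json"}
    body = json.dumps({
        "id": video_id,
        "snippet": {
            "title": title,
            "description": description,
            # snippet.categoryId is mandatory on update;
            # 28 = "Science & Technology" (assumed choice here).
            "categoryId": "28",
        },
    })
    return url, headers, body
```

In elisp, the same headers dict is what gets handed to plz's `:headers` argument.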

sachac, to random

Links to BigBlueButton recordings are now available from https://emacsconf.org/2023/talks/ . I'll eventually work on combining them into webms so people can watch them more easily instead of using the web player, and then I can figure out minor edits.

sachac, to random

EmacsConf was tons of fun! :) The prerecorded videos are up at https://emacsconf.org/2023/talks and I'll work on extracting the live talks and Q&A this month.

sachac, to random

@abcdw Just a friendly nudge to check into EmacsConf! :)

sachac, to random

Okay, all the prerecorded talks have edited captions and I've adjusted timestamps as needed, yay! 27 talk(s), 542 minutes: adventure (05:58), matplotllm (09:34), teaching (19:27), table (15:51), llm (20:26), one (22:18), writing (08:53), overlay (20:57), nabokov (09:51), eval (09:35), collab (19:16), solo (14:36), doc (42:45), ref (15:04), emacsconf (15:05), scheme (21:01), world (20:31), cubing (13:35), emacsen (18:28), emms (38:38), hyperdrive (40:03), steno (25:03), lspocaml (16:04), mentor (10:44), test (26:55), web (31:33), sharing (16:34).

Probably going to be live: 14 talk(s), 290 minutes: uni, voice, repl, unentangling, devel, core, hyperamp, koutline, parallel, eat, poltys, flat, gc, windows
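(If you're checking the arithmetic: the totals above seem to round each talk up to a whole minute. A quick sketch of that tally, with made-up helper names:)

```python
def total_minutes(durations):
    """Sum a list of MM:SS duration strings, rounding each one up to
    a whole minute -- which appears to be how 27 talks with the
    listed lengths come to 542 minutes."""
    total = 0
    for d in durations:
        m, s = map(int, d.split(":"))
        total += m + (1 if s else 0)  # round up any partial minute
    return total
```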

sachac, to random

I'm updating our opening remarks in which we help people learn how to find their way around the conference. I have tracks, watching, Q&A, Etherpad, IRC, captions, notes, general feedback, status, guidelines for conduct, updates and videos, hosts, and thanks (which will actually be in closing remarks). I need to add a tip on checking out the talk page for additional resources. Anything else?

sachac, to random

Along the lines of making things simpler and more manageable, I think we'll skip restreaming to YouTube and Toobnix this year. Maybe that'll leave enough CPU and memory to make the 480p restream more reliable, and it'll be two fewer things for me to juggle during the conference.

sachac, to random

The hosts and check-in volunteer might only have time for the actual conference itself, so I updated my Emacs Lisp code for generating Etherpad checklists with times, talk details, links, and backup plans. I hope those will give us a step-by-step guide for whatever manual steps we need to do while my crontab scripts and org-after-todo-state-change-hook functions handle the automatic parts. If things go to heck (missing speakers, script hiccups, other emergencies, etc.), I'm relying on y'all's patience. Backup plan: let's just have a big virtual meetup, yeah? :)

sachac, to random

Yay, all the talks uploaded so far either have captions or volunteers are working on captions for them. That's 4 in progress (127 minutes total) and 20 finished (365 minutes). One week to go. I can use the next few days to test the infrastructure and improve the process documentation (especially those just-in-case scenarios!) before I might need to do any last-minute captioning. Mildly stressed, but I know we'll all manage to figure things out!

sachac, to random

The aeneas forced-alignment tool performed poorly on one of the videos, so I manually fixed the timestamps using subed-move-subtitles and subed-waveform-minor-mode. Slowly getting the hang of this!
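subed-move-subtitles shifts a run of cues by a fixed offset; the same idea is easy to sketch outside Emacs as a timestamp rewrite over a WebVTT file (illustrative Python, not subed's elisp):

```python
import re

def shift_vtt(vtt_text, offset_ms):
    """Shift every HH:MM:SS.mmm timestamp in a WebVTT document by
    offset_ms milliseconds (negative moves cues earlier, clamped at
    zero). Same idea as subed-move-subtitles, sketched in Python."""
    def to_ms(h, m, s, ms):
        return ((int(h) * 60 + int(m)) * 60 + int(s)) * 1000 + int(ms)

    def fmt(ms):
        ms = max(ms, 0)
        h, ms = divmod(ms, 3600000)
        m, ms = divmod(ms, 60000)
        s, ms = divmod(ms, 1000)
        return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

    return re.sub(r"(\d\d):(\d\d):(\d\d)\.(\d\d\d)",
                  lambda m: fmt(to_ms(*m.groups()) + offset_ms),
                  vtt_text)
```

(subed's waveform mode adds the part this can't: seeing where the speech actually is so you know what offset to apply.)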

sachac, to random

I seem to have fallen into the enhanced CAPTCHA bucket that asks me to click on sooo many images of bicycles and motorcycles and traffic lights, probably because I misclicked a couple of things before (trackpoint! buttons!). I'm getting grumpier and grumpier about it. It was somewhat tolerable in the early days of asking humans for help with image classification or text recognition, but with all the corporate froth around self-driving cars or generative AI images... Meh.
