I have a Dreametech L10s Ultra vacuum that HA recognizes via the Xiaomi Miot Auto integration. I’m trying to add a custom:xiaomi-vacuum-map-card to a dashboard, and the vacuum is recognized but the camera (which I guess is the map) isn’t working due to “Invalid calibration”. But the calibration is whatever was...
I had originally tried this HACS integration when I was first setting up my vacuum and it failed. The reason turned out to be a subnet issue, but by the time I figured that out, I had forgotten about the Dreame Vacuum integration.
Do you know if the Dreame Vacuum integration supports the video camera on the vacuum? Or is there another way to view the vacuum’s camera in HA?
My goal is to be able to sync podcast episodes (the actual audio files) and their play state (played or unplayed, how many minutes I’ve already listened to) between devices, so I can stop listening to an episode on my phone, for example, and continue listening to the same episode on my desktop computer (continuing from the...
I should add that I’m not sold on AntennaPod, Podfetch, and GPodder. I think AntennaPod is a great app and I hope I can use it to do what I want here. Podfetch seems nice, with room to grow in terms of features and UX. GPodder seems pretty terrible (though I hardly know it) but also seems to be the de facto standard for syncing podcasts and play-states (or perhaps the only game in town?).
But I’d ditch any or all of them if it meant I could sync podcasts and play-states between devices. My only caveat is that the solution needs to be FOSS and self-hostable.
The only way I see to sync play-state is to use the ABS app or the web page. In ABS you can create an RSS feed for a podcast and subscribe to that feed in AntennaPod, and the podcasts sync but their play-state doesn’t. So I’ll use the ABS app on my phone instead of AntennaPod. ABS is missing some nice features common in good podcast players, but it works well enough for me.
I have some Atom Echos installed as HA remote voice assistants. They’re very cool, but they seem to say “I’m sorry I didn’t understand that” a bit too often when I’m not addressing them....
I enabled the assist_pipeline and retrieved and listened to the audio files from my Echos, but when I went to edit the values for noise_suppression_level, auto_gain, or volume_multiplier in the esphome/m5stack-atom-echo-wake-word.yaml file, that file didn’t exist. I do have an esphome directory and it contains m5stack-atom-echo-xxxxxx.yaml files for each of my Echos, but inside those .yaml files there is no voice_assistant section.
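For reference, this is the kind of voice_assistant block I was expecting to find and tune (a sketch based on the ESPHome voice_assistant docs; the microphone id and the numbers are just placeholder starting points, not values from my working config):

```yaml
voice_assistant:
  microphone: echo_microphone   # id of the i2s microphone defined elsewhere in the same file
  noise_suppression_level: 2    # 0 (off) to 4 (max)
  auto_gain: 31dBFS             # automatic gain control target
  volume_multiplier: 2.0        # software gain applied to the mic stream
```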
Can you please paste the contents of your m5stack-atom-echo-wake-word.yaml file (obfuscating anything private, of course)? I’ll try manually creating this file to see if the Echos recognize it.
I got some Atom Echos, configured them, and they work! I even customized my own wakeword and it worked on the first try. Thanks, Home Assistant team, for such an awesome product as Home Assistant and for fantastic documentation....
Good questions. I haven’t talked to the assistant through the browser or phone yet. That’s a good way to help narrow down which process might be causing the delay.
I’m running HAOS in Proxmox on a mini PC with a Celeron. A couple of people have said they’re using beefy hardware, so I might need a new box.
I don’t yet know the range of these Echos, but they seem to do a great job listening. They also have a speaker, but it’s super quiet, not really useful. If I want a verbal response I’ll have to push it through other speakers via an automation.
I changed the VM’s CPU type in Proxmox and gave the VM more resources (most of the host’s RAM and CPU cores), and the delays were cut in half to around 16 seconds. So I know what’s causing my delay (or probably most of it). I guess I need a beefier box.
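For anyone else on Proxmox, the changes were along these lines (run on the Proxmox host; 100 is a placeholder vmid, and the core/RAM numbers depend on your hardware):

```
qm set 100 --cpu host              # pass the real CPU's flags through instead of the generic kvm64 default
qm set 100 --cores 4 --memory 8192 # give the VM most of the host's cores and RAM
```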
I had tried backup.sh thinking HAOS might map the config dir to /, and I obviously tried /root/config/backup.sh, but I didn’t try the middle ground. :)
So now I have a probably-related question: the script runs, but it won’t authenticate with my gitea repo:
stderr: "Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists."
returncode: 128
Again, when I run the script while ssh-ed into HAOS, it works fine. So I suspect that when HA runs the shell script (e.g., via Developer Tools or an Automation), it’s doing it as a different user, or perhaps from a different container from which I haven’t yet copied the pubkey into gitea. What do you think?
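In case it helps, here’s the direction I’m trying (a sketch, not verified: the host name and paths are placeholders, and it assumes the failure is just a missing known_hosts entry for whichever account runs shell_command):

```shell
# Keep SSH state in a directory the script controls (on the real box this
# would be /config/.ssh so it persists; /tmp stands in here).
mkdir -p /tmp/ha-ssh

# Pin the Gitea server's host key non-interactively, so "Host key
# verification failed" can't happen (gitea.local is a placeholder host):
ssh-keyscan gitea.local >> /tmp/ha-ssh/known_hosts 2>/dev/null || true

# Tell git to use that known_hosts file and a dedicated deploy key instead
# of whatever ~/.ssh the invoking user happens to have:
export GIT_SSH_COMMAND="ssh -i /tmp/ha-ssh/id_ed25519 -o UserKnownHostsFile=/tmp/ha-ssh/known_hosts"
```

With that export at the top of backup.sh, git should stop caring which user or container runs the script.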
create a separate key and provide that public key to keep it separate from your user account.
I agree this is a better method.
I’m having trouble figuring out which user or container HA is using to execute the shell command. I ran docker exec -it homeassistant /bin/bash, ran ssh-keygen, and copied the pubkey into Gitea, but it had no effect. I tried to run ssh-keygen in the hassio-cli container, but ssh-keygen isn’t installed (so my assumption is that this isn’t a container that would do something that might need a key, since HA didn’t pre-load ssh-keygen; maybe I’m wrong). When I docker inspect the HA containers and grep for “User” or “UID”, there is no result.
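Since the key that works when I’m SSH’d in lives under that user’s home, my next attempt is a sketch like this: keep a dedicated deploy key on the /config volume (the one path every candidate container sees) and add its .pub to Gitea. Paths are assumptions, with /tmp standing in for /config:

```shell
# Generate a dedicated deploy key in a persistent location (/config/.ssh on
# the real system; /tmp stands in here) rather than in any container's ~/.ssh.
keydir=/tmp/ha-deploy-key
mkdir -p "$keydir"

# -N "" = no passphrase, since nobody is around to type one when HA runs the script.
ssh-keygen -t ed25519 -N "" -C "ha-backup" -f "$keydir/gitea_deploy" -q

# This is what gets pasted into Gitea as a deploy key:
cat "$keydir/gitea_deploy.pub"
```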
Howdy. I have HAOS running in a Virtualbox VM on a computer on my private subnet (let’s call it the .150 subnet). All my IoT devices are on my .151 subnet. HA can see most of my IoT devices because I’m not currently isolating the subnets, but my vacuum is defying discovery because of UDP crossing the subnets. I’m sure...
I’m planning to do that, but the host must stay on the private subnet, so I’ll need new hardware. This is probably the easiest/best approach, but the costliest. Thanks.
This is interesting. Do you mean that I should configure the private subnet itself to also contain addresses in the IoT subnet? What would that mean for “isolating” the subnets? Would it be possible for IoT devices to not see private devices?
Also, can you please explain what VirtualBox’s host network is? What does it do?
How to configure Dreametech vacuum in Home Assistant? (lemmy.d.thewooskeys.com)
I have a Dreametech L10s Ultra vacuum that HA recognizes via the Xiaomi Miot Auto integration. I’m trying to add a custom:xiaomi-vacuum-map-card to a dashboard, and the vacuum is recognized but the camera (which I guess is the map) isn’t working due to “Invalid calibration”. But the calibration is whatever was...
What are the differences between conversation, intents, intent_script, and responses?
I’m confused by the different elements of HA’s voice assistant sentences....
Seeking assistance getting AntennaPod, Podfetch, and GPodder to work together.
My goal is to be able to sync podcast episodes (the actual audio files) and their play state (played or unplayed, how many minutes I’ve already listened to) between devices, so I can stop listening to an episode on my phone, for example, and continue listening to the same episode on my desktop computer (continuing from the...
How do you change sensitivity of Atom Echos?
I have some Atom Echos installed as HA remote voice assistants. They’re very cool, but they seem to say “I’m sorry I didn’t understand that” a bit too often when I’m not addressing them....
What's your experience with the new openwakeword?
I got some Atom Echos, configured them, and they work! I even customized my own wakeword and it worked on the first try. Thanks, Home Assistant team, for such an awesome product as Home Assistant and for fantastic documentation....
Seeking assistance with running shell script from HA
Howdy. I have a bash script called backup.sh in /config and I’ve added the shell_command to configuration.yaml:...
Seeking assistance regarding IP address
Howdy. I have HAOS running in a Virtualbox VM on a computer on my private subnet (let’s call it the .150 subnet). All my IoT devices are on my .151 subnet. HA can see most of my IoT devices because I’m not currently isolating the subnets, but my vacuum is defying discovery because of UDP crossing the subnets. I’m sure...