Tuesday, December 31, 2013

Saying Goodbye to 2013 and 32-bit Windows

2013 is almost over and it really wasn't a very prolific year for me as far as HA was concerned. Work has been overwhelming this year and last, but I got some sporadic cycles to do some HA. I am, however, finishing the year with a flourish as I have some time off.

Just before the holidays, I picked up a new Haswell-based i3 server to eventually replace our aging HA server. I installed Windows 7 x64 on it, but moving forward from 32-bit XP means leaving things behind due to compatibility. It's something I've dreaded having to do, but it's also a chance to rid myself of some of these Windows dependencies, as much as possible. The WinEVM software used to program our JDS TimeCommander+ (TC+) will get left behind, and starCOMUltra (SCU), THE heart of our HA system, won't be coming along either. I don't want the hassle of running XP in a VM just for a couple of programs. While I've got an XP-based netbook to run WinEVM and program the TC+, there's nearly 15 years of HA scripting tied into SCU that has to be migrated.

I have my own app, xPL SCU, a scripting engine bridging SCU and xAP/xPL where about 1/4 of my HA code resides and where I'll be moving most of the nearly 10,000 lines of code I've written in SCU. I already have variations of xPL SCU running on x64 in the form of my other scripting apps, xScript and xPLScript (basically the scripting engine minus the SCU piece). In order to leave SCU, I'll hopefully be able to code my own interface to the TimeCommander+; when that happens, I'll remove those references from xPL SCU and everything should just carry over. We'll see what roadblocks I bump into.

In the meantime, since I built all the interfaces to other hardware into the SCU script, I have to rebuild that somewhere else. As I mentioned in my previous post, I moved ZWave and 1Wire to a BeagleBone Black, and I'm continuing in that direction. I'm putting all the serial devices on the BBB and will use xPL to communicate with the HA server - to keep their interfacing code independent of whatever HA software I use now or in the future. I already prototyped this path with the ZWave interface, and since I can run the same Python code on the Windows server and the BBB (just changing the serial port reference), I can debug it without moving the hardware to the BBB.
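The port-swap idea above can be sketched in a few lines (a hedged illustration: the port names and the xPL source ID are placeholders, not my actual setup):

```python
import sys

# Pick the serial port by platform so the same script runs unchanged on
# the Windows HA server (for debugging) and on the BeagleBone Black.
# The port names below are examples, not my actual devices.
def pick_port():
    if sys.platform.startswith("win"):
        return "COM4"            # Windows server during debugging
    return "/dev/ttyUSB0"        # BBB in production

# Build a minimal xPL status message announcing a sensor reading.
# The "example-bridge.bbb" source ID is a made-up placeholder.
def xpl_stat(device, value):
    return ("xpl-stat\n{\nhop=1\nsource=example-bridge.bbb\ntarget=*\n}\n"
            "sensor.basic\n{\ndevice=%s\ntype=generic\ncurrent=%s\n}\n"
            % (device, value))

if __name__ == "__main__":
    print(pick_port())
    print(xpl_stat("rfid1", "present"))
```

The point of the sketch is the single decision point: everything below `pick_port()` is identical on both machines, so the hardware never has to move during debugging.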

As a result, I've made some good progress in a short time. Three days ago, I wrote a Python script to interface xPL and the RFID reader, put that on the BBB, and moved the related logic from SCU to xPL SCU. The next day, I did the same with UPB. Yesterday, I started doing this for the WGL W800. Because I have so much stuff triggering off of X10 wireless (DS10As, MS10As, etc.), debugging the migrated code is taking a while. In addition, there's a lot of cleanup to do. Lots to do to keep me busy well into 2014.

Happy New Year.



Tuesday, November 5, 2013

Time Flies

Has it really been almost four months since my last post? Things have been incredibly busy at work, and I haven't had much time to blog since summer. We had a major liquidity event recently, and the build-up to it and the aftermath have brought an increased focus on the product I'm working on. Needless to say, the stupid crazy work schedule for a grossly understaffed project has been ramped up again. While it's great to have a cash reward for 2 1/2 years of manic, startup work, what I really want is time off.

I have been able to do some things here and there. One much needed housekeeping chore I did was cleaning up a bunch of wiring around the house and garage and putting all the exterior cameras and motions on a single 12V power supply (so they can easily be put on a UPS). Another project I've been working on is gradually cleaning up the jobs that run on my HA server, mainly in preparation to build a new one. It's a 6.5 year old Core2 Duo system running XP. Eventually I want to put in a new i3 or i5 server.

As part of that cleanup, I started testing putting some hardware on one of the BeagleBone Blacks I bought. I put the Aeon Labs ZStick on the BBB, installed OpenZWave and added xPL functionality to their Python sample code. After some testing, I moved ZWave permanently over from the HA server to the BBB and turned off my .NET app. Another thing I moved was 1Wire. I installed DigiTemp on the BBB to read the DS9097U 1Wire network and wrote more Python to dump the temperature data to mySQL. With that, I turned off another of my .NET apps.
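The DigiTemp-to-MySQL glue amounts to a little parsing and an INSERT. Here's a rough sketch, assuming DigiTemp's default output format; the table and column names are made up for illustration, and a real script would use a parameterized query through a MySQL driver instead of printing the SQL:

```python
import re

# DigiTemp's default output looks like:
#   Jul 12 06:02:01 Sensor 0 C: 21.50 F: 70.70
# (the format string is configurable in .digitemprc)
LINE = re.compile(r"Sensor\s+(\d+)\s+C:\s+([-\d.]+)")

def parse_line(line):
    # Return (sensor_number, celsius) or None if the line doesn't match.
    m = LINE.search(line)
    if not m:
        return None
    return int(m.group(1)), float(m.group(2))

def insert_sql(sensor, temp_c):
    # Placeholder table/columns; a real driver should get the values
    # as query parameters rather than via string formatting.
    return ("insert into temperature (sensor, celsius) values (%d, %.2f)"
            % (sensor, temp_c))

if __name__ == "__main__":
    sample = "Jul 12 06:02:01 Sensor 0 C: 21.50 F: 70.70"
    print(insert_sql(*parse_line(sample)))
```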

Is there a pattern here? Possibly. I have gotten tired of .NET apps running on XP but not on Windows 7 or breaking after a Windows update. The ever growing, bloated Visual Studio tools are another negative. I don't think I can get away completely from .NET, but I can reduce some of the dependencies I have by coding in Python. Of course, coding in Python means a certain Premise user on CocoonTech (who doesn't understand blogs, thinks they're hard to read and should be organized by topic instead of date) will have an easy time stripping the headers and credits from my code and passing it off as his own.

I also funded this Arduino project on Indiegogo, getting 5 boards: 2 for the kids and 3 for future projects. I haven't had much time to work on them, other than powering up and downloading sample code to make sure they work. Recently, I've been searching eBay for various sensors for these boards to play around with. Borderless Electronics has now come back with a follow-up campaign, featuring various combinations of a kit, board and shield. I'm also contributing to get some shields for my Arduinos, so they can be network enabled.

While on the topic of crowdfunded projects, I also backed the Pressy project. This is going to be a great way to kick things off on my Note II, like triggering my voice recognition home automation script.



Friday, July 12, 2013

What I Automate

This is something I've been meaning to write down, but never got around to. Let's see what I can remember:

Lighting:
  • Usual lighting control - control from switch, keypads, PCs, mobile devices
  • Night lighting - all rooms except the bedrooms have motion activated night lighting, which is overridden by light sensors
  • Back lighting - turning on backyard lighting when the back door deadbolt is unlocked and turning off when it's locked
  • Protocols - X10, UPB and ZWave

    Notification methods:
  • Whole house text-to-speech (TTS) announcements - for announcing weather, reminders, warnings, etc. during normal waking hours
  • Whole house sound effects - various notification sounds for the mailbox being opened, doorbell ring (changes with the season), motion on the porch, etc.
  • Zoned TTS announcements - Insignia Infocasts and SqueezeBoxes in bedrooms and living areas for targeted notifications
  • On screen displays (OSD) - Infocasts, PCs and TVs also display announcements in case they can't be heard (watching a movie, blasting the stereo)
  • IM - notifications are sent via IM when away from home and Internet connected
  • Email - when attachments are part of the notification
  • SMS - Google Voice sent SMS for when we're out and not Internet connected
  • multiple - depending on the severity of the notification, multiple methods may be combined. OSD & TTS typically happen for most notifications when someone's home

    Environment:
  • HVAC - disable HVAC when alarm armed in away mode or when a window is opened, warn if someone's trying to turn it on when a window is left open via TTS, OSD, IM
  • Fans - 1Wire temperature sensors in each bedroom control when to turn off any fans running on summer nights
  • Exhaust fans - turn on/off laundry room & bathroom fan based on humidity, guestbath fan based on duration of occupancy ;)
  • Leaks - wetness sensors under sinks & in bathrooms and alerts sent via all notification methods

    Info/Status notifications:
  • Temperature & weather for the day - TTS announcement in the morning as we come downstairs
  • Google calendar appointments for the day - same as above. As calendar events hit their reminder time, notifications are sent via IM and OSD.
  • Phone calls while we're out are announced as we enter the family room
  • Mailbox - OSD & sound on event. TTS reminder later to check it if we haven't gotten the mail.
  • Doorbell - OSD & sound effect on event. TTS to tell us the doorbell rang while we were out
  • Temperature - sound effect when it is cool enough to open the windows on days over 80°F
  • Washer finished - TTS, OSD, IM
  • Garage door left open with no activity for a period of time - TTS, OSD, IM, SMS
  • Front door left unlocked for a period of time - TTS, OSD, IM, SMS

    Telephony:
  • Google Voice forwarding - when arriving home, Google Voice forwarding for the home phone is enabled; when leaving, it's disabled. Arriving & leaving work also changes GV forwarding to my work phone.
  • Caller ID - incoming calls are checked against all our Google contacts, and if it's a match, the person's name (number if no match) is displayed & announced via TTS, OSD and IM.
  • Intercom - wired phones and softphones for calling specific rooms from other house/softphones
  • Mute/Pause AV - when the main phone is picked up.

    AV:
  • TV - automatically turn on when DVD player is turned on (but not off since we may watch something else after). game consoles are connected to one tv that is turned on & off when the consoles are.
  • Whole house audio - via Squeezeboxes
  • Uptime - all AV equipment in the house are monitored by current sensors so their actual power on status is known. uptime, last on time, and last off time are tracked.

    Dog:
  • Feeding - automated feeding 1 hour after our morning run, when I've already left for work. when the feeder is used, I get an SMS confirmation and a picture of the happy eater
  • Outdoor bark deterrent - when barking is detected by the side gate, verbal corrections are played by the garage speaker and a stream of water is shot at the gate area.

    Nanny:
  • Kids' laptops - when on, the kids are reminded every 20 minutes to get up walk around and look away from the screen
  • Game consoles - when on, the same 20 minute reminder is announced (and the TV is muted to reinforce the message)

    Cars:
  • RFID - tracking home & away status of cars - TTS, OSD, IM announcement when a vehicle arrives home

    Irrigation:
  • Run time - adjusted based on rain, weather forecast & past temperatures

    Security:
  • CCTV + DVR - typical motion based recording of cameras
  • Away notification - Email snapshots of particular events (doorbell, mailbox, porch activity, etc.) when we're out
  • Triggered - all sorts of notifications

    Power monitoring:
  • Real-time - whole house power monitoring split into 7 zones
  • Tracking - local via rrdtool & canvas.js, cloud via automatic updating to Google Drive spreadsheet
  • Oven - warning via TTS, SMS if arming house in away mode and the oven is on
  • Solar production comparables - in progress, comparing current production of array to past to determine if cleaning is necessary

    Security:
  • Locks - deadbolt state monitoring
  • Reminder - warning if we unlock a door and the alarm is still armed in home mode (so we don't accidentally set off the alarm and wake up the neighborhood)
  • Away lighting - replay lighting events from some randomly chosen log file
  • AV - turn off AV equipment when armed in away mode

    Computers:
  • Backups - typical automated local and remote backups
  • Monitoring - cpu utilization & temperature, hard drive utilization & temperature, uptime, low disk & memory notifications, automatic killing of runaway processes on servers
  • Internet - bandwidth & connection monitoring, auto power cycle modem & router if connectivity is lost
  • Power - automatic hibernation of certain powered on machines when alarm is armed in away
  • Wake from hibernation - automatic waking of my laptop from hibernation upon first motion in the master bathroom in the morning or arrival home of my cellphone
  • Email - TTS, OSD notification of new emails
  • Craigslist bot - periodically scans CL for things I'm looking for (found a $10 and $20 Squeezebox this way!)

    Occupancy/Presence:
  • Motion sensors - some hardwired, many wireless all over the house & around the house
  • Audio - sound detectors upstairs and in the living room
  • Video - one camera connected to a hacked Seagate Dockstar is being used for a specific motion detection purpose
  • Cellphone - via bluetooth, mentioned above
  • Car - via RFID, mentioned above

    Control methods:
  • Android speech recognition - using our phones & tablets (and SL4A+Python) to voice control lighting, AV equipment and various appliances, query status of devices and control audio players and music selection
  • IM - using the same language parsing as the speech recognition, everything that's voice controllable can be controlled via IM
  • AJAX Floorplan GUI - control & view status of light/appliances/AV/HVAC from cellphones, tablets, PCs - basically anything with a modern browser. see status of doors/windows/locks/temperature
  • Insignia Infocast (Chumby) - used as touchscreen control panels with our Panel Builder app and "offline firmware" (since Chumby service is essentially dead)
  • Wireless keypads - X10RF & ZWave based keypads
  • IR - all Squeezeboxes are IR receivers and can broadcast the IR codes they receive via the xPL plugin

    2nd Home:
  • Minimal automation - lighting, occupancy, cellphone presence, leak sensors
  • Remote link - all status is updated to the main home via IM and lighting can be controlled over IM

    Work:
  • Cellphone tracking - running on my office laptop, IMs my presence to the HA server when my cellphone is around, to adjust Google Voice forwarding

    Old stuff (no longer in use):
  • Baby monitor - broadcast through the whole house speaker system and a local Shoutcast channel (for listening on a PocketPC PDA when I was out in the yard - yes, it's been a long time since the kids were babies); automatically turns on when the drop-side crib (now banned) door is raised, off when someone enters the room, and back on after a period of inactivity if the crib door is still raised

    This is just a quick list. I'm sure I'll add to it when I have more time... I need to annotate this with some pics and links too (in progress).

    This is a screen cap of the main screen for our Insignia Infocasts (aka Chumby) that we have around the house:


    Saturday, June 22, 2013

    Washed the Solar Panels

    I've been meaning to wash the panels for a while now, but a Google study I read said the gains were minimal. They've been installed for over three years and there's a good layer of dust and some bird poop on them. I'm sure if I cleaned them, I'd get some increased production. In June three years ago, we would occasionally get over 25 kWH per day. Now, we are seeing just over 22 kWH. That's a pretty bad drop, almost 12%! Panel output naturally degrades over its lifetime, but this exceeds what I should see (the panels are guaranteed to produce at least 90% of their rated wattage for 10 years and at least 80% for the 15 years after). I've been wasting money all this time! I finally got around to getting a telescoping car wash brush:


    The brush extends to 75", but since the panels are on a 2nd story roof, I can only reach the first 8 panels without creeping near the edge of the roof. I ended up zip tying the brush to an extension pole we used for painting, and I was easily able to reach to the end of the 2nd row of panels. At 6AM this morning, I climbed up on the roof with the brush, the pole and the hose and washed away. In 15 minutes, I had done as well as I could. The brush is very soft, and there still seemed to be some dirt I wasn't able to scrub away, but the array is a lot cleaner. Today will be cloudless and mild, like yesterday and the day before, so I'll be able to make a good comparison of the effects of the cleaning. So far, at 9AM, the panels are producing 2245 W and total production is 3.033 kWH. Yesterday at 9AM, the panels were delivering 1965 W and had produced 2.574 kWH. Instantaneous output has gone up 14% and total production is up 17%. Pretty good! We'll see where the day's production ends up, and I'll post updates below. It certainly looks like I'll be making periodic trips onto the roof to wash the panels.

    Time    Today Watts    Yesterday Watts    % Change    Today kWH    Yesterday kWH    % Change
    9AM     2245           1965               +14.2%      3.033        2.574            +17.8%
    1PM     2973           2726               +9.1%       14.264       12.861           +10.9%
    4PM     1913           1673               +14.3%      21.924       19.826           +10.6%


    Update: At the end of the day, 24.5 kWH of electricity were produced versus 21.99 kWH yesterday, an 11.4% increase. That's pretty significant. It's as if I added 2 more solar panels to the 16 we already have. The question is, when should I get back up there and clean them again? One interesting bit from the above table is that dirty panels are actually less productive in the morning and evening. In other words, the grime on the panels obstructs more sunlight when the sun is not directly above them. I'm sure there's some high school physics explanation behind that, but I'll let someone else figure it out :)

    Monday, June 17, 2013

    Major Milestones Along The Way



    I was cleaning up some stuff in my office and came across the invoice for my JDS TimeCommanderPlus (TC+), and it made me think about how our HA system got where it is today. September 1995 - that's when the madness began. Although I was playing with some wireless X10 stuff a little earlier, the real automation started with the TC+. To this day, it's still chugging away, handling X10 and IR (with an IR-Xpander2), along with some digital inputs and a few relays. It was great, for a time. The TC+'s native software package, WinEVM, was practically a Windows 3 program and never evolved beyond that. Not being a software guy, I was content to use the WinEVM point-and-clunk interface to write code (later dubbed "starglish" in reference to the follow-on JDS Stargate) to download to the TC+. It had hooks to allow it to play sound files on a serial-connected host PC, plus the ability to issue shell commands. I created a bunch of batch files to take advantage of that, but not much else.

    A major leap forward came when I started beta testing a software package called starCOMPlus in 2001 (or was it 2000? I can't remember). starCOMPlus exposed all the devices in the TC+ through DCOM, allowing users to create standalone apps that could control and query all aspects of the TC+. It also gave the ability to create web interfaces, and I learned how to build ASP pages to control X10, relays and IR and read the digital and analog inputs. starCOMPlus also introduced what the developer called a "hosted" script, which also had access to the TC+ devices and would run while the app ran. It allowed offloading and expanding of functionality from the TC+ to the host computer using a "real" language (jScript or VBScript - I chose jScript). From there, the number of things I could automate and interface to exploded.

    In 2003, I stumbled upon xPLRioNet, an alternate server for the Rio Receiver (RR). The RR was one of the first networked MP3 players, and I had picked up a couple being liquidated around 2001. xPLRioNet (later called MediaNet), was the best of a few alternate RR servers. It featured this neat thing called xPL, that allowed status and control messages to be passed around my home LAN. As I found out, there were many xPL apps that expanded HA beyond what I had known. I could now have distributed nodes tied together by xPL. I was hooked.

    Four years later, I figured out how to write an AJAX app, and put together the pieces of what became our floorplan GUI. During that process, I taught myself PHP, JavaScript, DOM manipulation and especially mySQL, which has become an integral part of our HA system. With xPL providing the first piece of a distributed HA system, mySQL became the persistent state of the system, accessible and changeable by any of the nodes. Now, I was no longer tied to having Windows nodes.

    In early 2008, I started learning to write my first .NET app. I had been using a copy of HAL Deluxe that I had found on liquidation for about $10, but the user interface was a piece of crap. It was another point-and-clunker, so my first app exposed HAL's devices to a scripting engine (much like what starCOMPlus did for the TC+) and tied in xPL. While I was writing that app, I was also using that knowledge to write another app linking starCOMUltra (the sequel to starCOMPlus) to xPL & xAP, and building in another scripting engine. To that point, I had been using other apps to script xPL interactions: xPLHAL and xAP Floorplan. Neither provided the free-form scripting that starCOM* did and I craved, so I wrote my own. Once I got my feet wet in .NET, I was churning out applications like crazy & the functionality of our system took another exponential jump.

    It has been nearly non-stop HA since then (as you can see from all the blog entries), except for the work-induced hiatus last year. As always, I'm on the lookout for new ideas to implement (although I'm notoriously frugal!).

    Sunday, June 9, 2013

    Selecting Music to Play on a Squeezebox Using Android Speech Recognition

    This is a short clip showing two examples of using Android speech recognition to select music to play on a Squeezebox (actually SliMP3). It shows how the server side maintains context of the current operation. I ask it to play 30 Seconds to Mars, and the server queries the Logitech Music Server database for their albums. It returns them and my phone asks which album I want to listen to. I reply "This is War" and then am asked where I want the music played ("What zone?"), to which I answer guestroom. The server then queues up the album and launches it in the guestroom, turning on the SliMP3 and the powered speakers. In the second example, I tell it a particular album (Ride the Lightning) and location in one sentence. The requested album is launched in the guestroom.
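For the curious, here's a stripped-down sketch of how that kind of server-side context could work: the server remembers which question it just asked, so a bare reply like "This is War" is interpreted as the answer. The class and its album data are illustrative, not my actual code:

```python
# Minimal dialog-context sketch: the server keeps state between
# utterances so a bare reply is understood as the pending answer.
class MusicDialog(object):
    def __init__(self, albums_by_artist):
        self.albums = albums_by_artist   # {artist: [albums]}, illustrative data
        self.pending = None              # which question we're waiting on
        self.album = None

    def handle(self, msg):
        if self.pending == "album":
            self.album = msg
            self.pending = "zone"
            return "What zone?"
        if self.pending == "zone":
            self.pending = None
            return "Playing %s in the %s" % (self.album, msg)
        # New request: look the artist up and ask which album.
        for artist, albums in self.albums.items():
            if artist in msg.lower():
                if len(albums) == 1:
                    self.album = albums[0]
                    self.pending = "zone"
                    return "What zone?"
                self.pending = "album"
                return "Which album? " + ", ".join(albums)
        return "Artist not found"

if __name__ == "__main__":
    d = MusicDialog({"30 seconds to mars": ["A Beautiful Lie", "This Is War"]})
    print(d.handle("play 30 seconds to mars"))
    print(d.handle("This Is War"))
    print(d.handle("guestroom"))
```

In the real system the lookup would hit the Logitech Music Server database instead of a dict, but the context handling is the same idea: one pending-question variable per conversation.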

    On a related note, here's sample SL4A code that passes recognized speech to an IM address and speaks the responses received. It's the basis of what runs on my phone.


    Saturday, June 8, 2013

    New Toy: BeagleBone Black

    It's been a while since I got a new toy and I've been eyeing a Raspberry Pi, for no other reason than everyone seems to have one. A few months ago, I read about the BeagleBone Black (BBB), a revised version of the BeagleBone with more power and a much cheaper price of $45, barely more than a Pi. Sold! I just had to find a place that had them in stock. I eventually found a site, Special Computing, that had the BBBs for $43 each (has since gone back to MSRP) with just $3 first class USPS shipping for 2 (I always tend to buy these type of gadgets in pairs for some reason - Quatech serial servers, Rio Receivers, 3Com Audreys, Insignia Infocasts, Seagate Dockstars...) I ordered them Sunday night, they shipped out Monday from Arizona and arrived at my office in Silicon Valley on Wednesday. It came too fast! I usually have time to do a bit of research to plot out what I'm going to do with my new toy before it arrives.

    Last night, I finally had some spare cycles and got to work. I knew I didn't want to use the Angstrom Linux that comes pre-installed on the BBB - I wanted something with access to the most recent Linux packages. I figured I'd go for Ubuntu. I wanted to see how a desktop would run on it anyway. I went with a pre-built Ubuntu 13.04 image and followed these directions to install it on an 8GB microSD card I had lying around using my wife's Linux laptop. (Yes, I have my wife, and 10 and 12 year olds, running Ubuntu on their laptops instead of Windows!) The next thing was to get the BBB booting off the uSD card. Apparently, there's a button on the BBB you push to force it to boot from the uSD instead of the onboard eMMC, but that's not going to work for unattended use. Instead, I found this method to easily get the BBB to automatically boot from the uSD card: connect the BBB to your computer like a USB drive, but instead of deleting the MLO file, I just renamed it MLO.save - in case I want to boot from the eMMC in the future.

    After disconnecting the BBB from my laptop, I put the uSD in it, powered it up and booted Ubuntu from the card. Shellinabox comes up by default, allowing me to log into the BBB in a browser window. SSH didn't come up, so I enabled that with "sudo update-rc.d ssh defaults" and went through the process of adding a user ("sudo adduser ...") and adding that user to the sudoers file. Then I set up a light window manager ("/bin/bash /boot/uboot/tools/ubuntu/minimal_lxde_desktop.sh"), installed VNC ("sudo apt-get install vnc4server") and set up VNC to use lxde. I also wanted bluetooth support ("sudo apt-get install bluez") and needed Python pexpect for a project I want to do (Python was already installed in the Ubuntu image). Pexpect came as a .deb file, so I needed to install dpkg ("sudo apt-get install dpkg") to be able to install pexpect ("sudo dpkg -i python-pexpect_2.4-1_all.deb").

    That's how far I got last night. I was able to VNC in, see the lxde window manager come up, launch Chromium and log into GMail in slow motion. Web browsing seems a bit much for this platform. I have yet to hook it up to a monitor (I need to get a mini HDMI to HDMI cable). My immediate use will be to experiment with the TI SensorTags. The tags use bluetooth low energy (BLE), which is built into Linux kernels 3.5 and higher.


    Tuesday, May 28, 2013

    Speech Recognition of Numbers for Timed Events

    One of the knocks against using speech recognition for home automation is that you could have used a different method, like a touchscreen or remote, for more efficient control. However, timed events are one of those use cases that I find better with voice control. Let's say I want to turn on a light for 20 minutes. With a GUI, I have to select the light and then select a mode (delayed - turn on after X seconds, interval - turn on for X minutes then turn off). To select the delay/duration, I would need some sliders, text boxes to type the time or maybe a drop down list of predefined times. If I want to schedule with days or dates, then I would need a date picker/calendar as well. Or, I could just say "At 6PM on Sunday, turn on the porch light for 15 minutes."

    Continuing with my recent experiments with Android speech recognition, I began adding timed events. One advantage of a free-form speech recognition engine like Google's is the ability to recognize any number that's spoken. You're not limited to a set of predefined options, like with Homeseer:
    <1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|30|40|50|60|70|80|90> <seconds|second|minutes|minute|hours|hour|days|day>
    
    Kind of a nitpick, but what if you want 25 minutes? Can't do it, but Google will recognize whatever you say, whether it's 3 fortnights or 2,567 seconds. It may not be necessary, but it gives you the flexibility to do whatever you want. It's up to your software to parse out the numbers and units. With Python, it's simple to match the sentence against a particular pattern and extract the necessary parameters. The following code shows how to extract information for basic delayed/duration type of events.
    regex_delay = re.compile(r'\b(in|for|after)\s+(\d+)\s+(day|hour|minute|second)s?\b')
    if regex_delay.search(msg):
      delay_parm = re.findall(regex_delay, msg)
    
    delay_parm will now contain a list of groups. If your command is "turn off the garage light after 15 seconds", you'll get this:
    delay_parm = [('after', '15', 'second')]
    delay_parm[0][0] = after  # delay type
    delay_parm[0][1] = 15     # delay value
    delay_parm[0][2] = second # delay unit
    
    Now you have all the information you need to perform the action:
    # convert to common unit, seconds
    if delay_parm[0][2] == "day":
      delay_time = int(delay_parm[0][1]) * 60 * 60 * 24
    elif delay_parm[0][2] == "hour":
      delay_time = int(delay_parm[0][1]) * 60 * 60
    elif delay_parm[0][2] == "minute":
      delay_time = int(delay_parm[0][1]) * 60
    else:
      delay_time = int(delay_parm[0][1])
    
    if delay_parm[0][0] == "for":
      do_something_for(delay_time)
    else:
      do_something_delayed(delay_time)
    
    Scheduling an event based on a day ("next Tuesday"), a time ("at 3PM") or date ("December 31, 2014") is just an extension of this. Take a look at this demo where I'm showing a time based reminder and a delayed lighting event.
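A sketch of that extension for clock times, in the same regex style as above (hedged: the pattern and function names are illustrative, and hooking the result into an actual scheduler is left out):

```python
import re

# Match "at 3PM", "at 3 p.m.", "at 6:30 AM", etc.
RE_TIME = re.compile(r'\bat\s+(\d{1,2})(?::(\d{2}))?\s*([ap])\.?m\.?\b', re.I)

def parse_time(msg):
    # Return (hour, minute) in 24-hour form, or None if no time is found.
    m = RE_TIME.search(msg)
    if not m:
        return None
    hour = int(m.group(1)) % 12          # "12 AM" -> 0, "12 PM" -> 0 + 12 below
    minute = int(m.group(2) or 0)
    if m.group(3).lower() == 'p':
        hour += 12
    return (hour, minute)

if __name__ == "__main__":
    print(parse_time("at 6PM on Sunday turn on the porch light"))
```

Day names and dates would be handled the same way: one more regex alternative each, feeding an offset into the scheduler.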


    Saturday, May 18, 2013

    Generic Speech Recognition Script for SL4A

    I've finally had some time to make a generic speech recognition script that hopefully any SL4A capable Android device can use. I've taken parts of my script from the previous post and added in some sample pattern matching from my server side script. The result is a script that can issue commands to your home automation controller or software by fetching URLs. A couple of prerequisites: you must have SL4A and Python for Android installed on your device. It would also help to be familiar with some Python and its regular expression syntax. The code is well commented and has samples for recognizing phrases like "turn off the kitchen light" and "turn the master lights off" - so hopefully that's enough to kickstart your automating. So go ahead and get it!
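In spirit, the script boils down to: take a recognized phrase, pattern-match it, and fetch a URL on your controller. This sketch shows just the matching half with a made-up URL scheme; on the device, the phrase would come from SL4A's speech recognition and the resulting URL would be fetched to issue the command:

```python
import re

# Map spoken patterns to controller URLs. The "ha-server" host and the
# /set?device=... path are placeholders -- substitute whatever URL
# scheme your own HA controller or software exposes.
COMMANDS = [
    # "turn off the kitchen light"
    (re.compile(r'turn\s+(on|off)\s+the\s+(\w+)\s+light', re.I),
     lambda m: "http://ha-server/set?device=%s_light&state=%s"
               % (m.group(2).lower(), m.group(1).lower())),
    # "turn the master lights off"
    (re.compile(r'turn\s+the\s+(\w+)\s+lights?\s+(on|off)', re.I),
     lambda m: "http://ha-server/set?device=%s_light&state=%s"
               % (m.group(1).lower(), m.group(2).lower())),
]

def phrase_to_url(phrase):
    # Return the first matching command URL, or None if nothing matched.
    for pattern, build in COMMANDS:
        m = pattern.search(phrase)
        if m:
            return build(m)
    return None

if __name__ == "__main__":
    # On the phone, this phrase would come from SL4A speech recognition,
    # and the returned URL would be fetched to issue the command.
    print(phrase_to_url("turn off the kitchen light"))
```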

    Saturday, May 11, 2013

    More Two Way Interaction With Android Speech Recognition

    Let's start with the demo first. The video shows commands and queries being spoken and recognized by Android speech recognition. Our web GUI is also in the shot - since the wife forbids me to walk around filming the insides of our house for the world to see :) - so you can at least "see" some of the status being queried and the results of some actions. There are some annotations on the video, but you can't see them on the embedded player. Click through to YouTube to see the video with annotations.



    What makes all that work is that the queries are passed to a server for processing. That opens the door to two-way interactions where you can not only control your system but query it as well. As I mentioned in my previous post on this topic, using IM as a transport mechanism allows the recognized phrase to be sent to the server and the responses sent back to the Android device. Over on our server, EVERY device and its state is logged in our MySQL database. This was done when we built our AJAX based GUI. Also, since our system is distributed, MySQL provides a place for status to be updated and synced between various devices. Below is a snapshot of a phpMyAdmin page showing part of one of our database tables. The table contains the device name, type, its state and when it was turned on and off.



    Every device and its state is stored: every light, appliance, AV device, motion sensor, door, window, lock, car, phone, computer, etc. Whenever a device's state changes, a function gets triggered in whatever software is interfacing to that device (which is for the most part Windows scripting like the jScript below):

    function setStatusOnOff(device,type,state,secs) {
        try {
            if (state=="off") {
                mysqlrs.Open("insert into status (device,type,state,secs_off) values ('"+device+"','"+type+"','"+state+"','"+secs+"') on duplicate key update state='"+state+"', secs_off='"+secs+"'",mysql);
            } else {
                mysqlrs.Open("insert into status (device,type,state,secs) values ('"+device+"','"+type+"','"+state+"','"+secs+"') on duplicate key update state='"+state+"', secs='"+secs+"'",mysql);
            }
    ...
    }
    

    Since the device name is stored as a normal non-abbreviated name ("family room tv" instead of "frtv"), it's straightforward to use the recognized speech to search for devices using MySQL queries. The next step is to figure out what type of command is being issued. For example, a command will have the phrase "turn on" or "turn off" in it. Since I use Python on the server to process the speech, I use its regular expression (regex) functions to pattern match for commands:

    reTurn = re.compile(r'(^|\s+)turn.*\s+o(n|(f[f]*))($|\s+)',re.I) # recognize "turn on", "turn this and that on", "turn this and that blah blah blah off" or just "turn off", even "turn of", anywhere in a sentence
    
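A quick sanity check of what that pattern does and doesn't match (the sample phrases here are my own illustrations):

```python
import re

# Same pattern as above: "turn ... on/off" (even the typo "of") anywhere in a sentence
reTurn = re.compile(r'(^|\s+)turn.*\s+o(n|(f[f]*))($|\s+)', re.I)

print(bool(reTurn.search("can you turn on the kitchen light")))   # True
print(bool(reTurn.search("turn the hall lights off please")))     # True
print(bool(reTurn.search("please turn the porch light of")))      # True - tolerates "of"
print(bool(reTurn.search("what is the temperature outside")))     # False - no command
```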

    After figuring out if it's a command or query, my script then strips out extraneous text to simplify extracting the device and type. What gets stripped out depends on how things are phrased in your household. Here's a snippet to do that, where msg is the recognized phrase:

    msg = re.sub(r'\b(turn|the|a|can|you|please|will)\b', ' ', msg, flags=re.I) # strip out unneeded words (whole words only, so "a" doesn't eat letters inside other words)
    

    I'm experimenting with natural language processing to strip out unnecessary words automatically but it's not ready yet. Next, the script figures out the type of device involved. For lighting, it would use a regex similar to this:

    reLight = re.compile(r'\b(light|lamp|chandelier|halogen|sconce)s?\b', re.I) # match lighting words, singular or plural
    

    Since all the extra words have been stripped out and the type has been determined, all that's left is to formulate a MySQL query like this to get the actual device name:

    msg = re.sub(r"\s+","%",msg) # replace spaces with wildcard character %
    if reLight.search(msg):
      query="select * from lighting where device like '%"+msg+"%'"
    

    This is necessary to remove ambiguities in the recognition. A light may be named "guestbath" in the HA system, but Google may pass the recognized phrase as "guest bath." With the actual device name, the final steps are to issue the command and send a response back to the Android device. As lights and other devices are added to the HA system, nothing else needs to be added. Contrast that with other automation systems where you have to set up a recognition phrase for every device and possibly every state in your system. In our system, new device names will be parsed out of the database, and no changes are required on the Android device. Queries also follow a similar flow, except instead of issuing a command, a response is formulated with the status and sent back to the user.
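Putting those steps together, the backend flow (strip the filler words, detect the device type, build the wildcard query) looks roughly like this. This is a simplified sketch: the build_query helper, the word lists and the table/column names are illustrative, not the production script:

```python
import re

# lighting words, singular or plural
reLight = re.compile(r'\b(light|lamp|chandelier|halogen|sconce)s?\b', re.I)

def build_query(msg):
    """Turn a recognized phrase into a wildcard MySQL query string (sketch)."""
    msg = msg.lower()
    # strip filler and command words so only the device description remains
    msg = re.sub(r'\b(turn|the|a|can|you|please|will|on|off)\b', ' ', msg).strip()
    # replace runs of spaces with MySQL's % wildcard, so "guest bath"
    # still matches a device stored as "guestbath"
    pattern = re.sub(r'\s+', '%', msg)
    if reLight.search(msg):
        return "select * from lighting where device like '%" + pattern + "%'"
    return None

print(build_query("can you turn on the guest bath light"))
# select * from lighting where device like '%guest%bath%light%'
```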

    That's the backend. I'll cover the frontend in another post.

    Thursday, May 9, 2013

    Using CanvasJS to Graph Power Consumption

    We've been using RRDtool for graphing everything from temperatures, to disk usage, to power consumption. It's very powerful and makes some nice charts, but I can never remember how to set up the database. Plus, my server is constantly running the tool to generate the graphs every 15 minutes so they're relatively up to date when someone views them. I'm now playing with CanvasJS, which uses HTML5 and JavaScript to easily generate some really cool graphs. I'm using power consumption as my test bed for implementing CanvasJS. Data for the power consumption is dumped into our MySQL database every 2 minutes (it's actually coming in every second, but I'm only sampling the data every 2 minutes for this graphing application). With some JavaScript and PHP pulling the data out of MySQL, the charts are generated on the fly. It works REALLY well and it's fast. You can pan and zoom the chart to see the exact power usage at a specific time. Check out the gallery for more samples with code. I will probably transition all our system's graphing over to CanvasJS after I have more time to experiment. In the meantime, here's a short video showing the power consumption graphs I'm working with.



    Sunday, April 21, 2013

    Flat UI Update to GUI

    Our major HA user interfaces are web based, and when I see prepackaged UI elements, I think about how I can use them to improve what I have. We mainly use our custom floorplan GUI, but occasionally use some older interfaces. One of them is below, design circa 1999 :) As you can see, the artistic side of my brain isn't very developed.



    It's actually the 2nd generation - the first was ASP and IIS based. The 2nd version didn't change the UI at all, just the implementation, which I switched to Perl CGI and Apache. I ran across a Flat UI package some time ago. Flat, from what I've gathered, is how many of the newer, hip websites are designed. I figured I'd give it a try and bring some of its design elements into our UI. As a tutorial, I've redone the above lighting UI with toggles and sliders from Flat UI. You can see how much nicer it looks in this short clip:



    It's also more finger friendly for tablet use. I made my own tweaks to fit in my existing template, but I still need to move things around a little (slide the toggles down a few pixels, maybe reduce the open space). It was relatively painless (I already know enough JavaScript and jQuery to figure out most of the kit), and I can see using it in other interfaces.

    Tuesday, April 16, 2013

    Using SL4A and Android Speech Recognition for Home Automation

    My latest project has been experimenting with SL4A (Scripting Language For Android) and Python on my Galaxy Note II. I started with the included saychat.py sample to build a simple script that kicks off Android speech recognition. It takes the text result returned from Google and sends it over IM to our HA server. The HA server does some basic natural language processing on the text, extracting commands and performing the operations if any valid ones are found. It then returns a response over IM to the phone with the result of the command(s). Back on the phone, the Python script has been waiting for this confirmation and uses TTS to read it back. The cycle repeats until the user says "goodbye" or it gets two consecutive recognition results with no speaker. Here's a short YouTube video of it in action:



    From the video, you can see I've tried to parse the speech so that it can find the commands and devices even if the command is spoken differently. I used three different phrases:

  • "Can you turn on the kitchen light and dining room light?"
  • "Can you turn off the lights in the kitchen?"
  • "Turn off the dining room light."

    I was trying to avoid having only simplistic commands like the last one. The first one demonstrates the ability to speak a command for multiple devices, and the ability to preface the command with "Can you" or pretty much anything like "The dog wants you to" ;) The 2nd command shows that it's not restricted to parsing "kitchen light" together. The last command is a typical HA VR command. My parser also has the ability to decode multiple commands in one utterance, such as "Turn off the kitchen light, the guest bath fan and living room light and turn on the back floods." The only challenge is saying everything you want to say without much of a pause; otherwise, recognition stops and the partial command will be sent.
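Decoding multiple commands in one utterance can be sketched by splitting the sentence at each "turn" and pulling the on/off word out of every chunk. This is a simplified illustration of the idea, not the actual parser (split_commands is a hypothetical helper):

```python
import re

def split_commands(msg):
    """Split a compound utterance into (devices, state) pairs (sketch)."""
    results = []
    # Break the sentence at each "turn"; each remaining chunk should contain
    # one on/off word, with the device list around it.
    for chunk in re.split(r'\bturn\b', msg, flags=re.I):
        m = re.search(r'\b(on|off)\b', chunk, re.I)
        if not m:
            continue
        # devices may appear before or after the on/off word
        devices = (chunk[:m.start()] + ' ' + chunk[m.end():]).strip(' .,')
        devices = re.sub(r'\s*\band\b\s*$', '', devices)  # drop a trailing "and"
        results.append((devices, m.group(1).lower()))
    return results

print(split_commands("Turn off the kitchen light, the guest bath fan "
                     "and living room light and turn on the back floods."))
```

Each (devices, state) pair would then be run through the same filler-stripping and database-matching steps as a single command.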

    A few advantages of using this setup:

  • Google's speech recognition in the cloud is probably the best, most up-to-date system. They started building up their system with the now-closed GOOG-411 service. Further fine-tuning gets done on the millions of voicemails their Google Voice service transcribes. Their Chrome browser also uses their speech recognition and of course, so do the millions of Android users. All this input goes into tuning their accuracy, and what you end up with is one of the best-performing, most up-to-date speech recognition systems. If you're using Microsoft's Windows VR, you're probably getting something that gets updated every few years with each OS release - if you're upgrading. With HAL, you're getting a 1990s VR engine. I'm not even sure if that gets updated anymore.
  • Google's free form speech recognition allows the most flexibility in speaking commands. Granted, that makes the parsing more difficult, but it allows a system that can more accurately respond to the different ways different people phrase commands. Most speech recognition engines I've worked with require you to pre-program canned phrases in order to recognize commands. If you deviate just a little from what's programmed, good luck getting your command recognized.
  • By using Jabber IM as a transport mechanism for the recognized commands, the same system that works at home, works when you're away. You just turn on your mobile data - there's no VPN or SSH tunnels to set up every time you want to speak a command. There's one level of security for free since your home's IM client must have pre-approved other users to allow communication (adding them to the roster). Another level can be done at the scripting layer of your HA software, by limiting what IM users can issue certain commands. For extra security, you can even encode or encrypt the text being sent over IM if you want, but if you're using Google Talk servers, your communication is already wrapped in SSL.

    A few more details. Using SL4A, I cannot control the default speech recognition sounds - they can get annoying after a while. I'm using Nova Launcher as my launcher instead of TouchWiz. Nova Launcher lets you remap the home key behavior on the home screen. When pressed, instead of showing the zoomed-out view of all my screens, it kicks off the script. Also, my HA device database is stored in MySQL, which allows for powerful searches and easy matching of what's spoken to actual devices - even when the device name isn't exactly the same as what was spoken. I've been using the MySQL setup, IM interface and command parsing for many years now (although the parsing was more primitive), so integration was extremely simple. At some point, I would like to implement NLTK, the Natural Language Toolkit, for more complex language processing.


    Wednesday, April 3, 2013

    2012 Most Downloaded (3 months late)

    Here is our list of popular downloads for 2012:

    1. EventGhost xPL Plugin - 99 times
    2. xScript - 77
    3. BlueTracker - 61
    4. xPLGVoice - 39
    5. xPLSerial - 17
    5. BlueTrackerScript - 17
    7. xPLGCal - 16
    8. t2mp3 - 15
    8. Blabber - 15
    8. Noise - 15
    8. xPLChumby - 15

    The EG plugin continues to be popular despite our stopping development on it years ago. Four of the top five and their downloads are almost exactly the same as last year, with xPLChumby dropping and xPLGVoice getting more interest. There are no new apps on the list since I did virtually no HA last year, but it's nice to see there's still a similar amount of interest in our existing apps. I haven't had the urge to code anything new, let alone time to brainstorm new things. We'll see what 2013 brings...at least I'm blogging again.

    Saturday, March 30, 2013

    Solar Year #3

    We just finished our 3rd year of having solar panels, in which we generated nearly 75% of our electricity usage and saved $1,107. Since the first year, we've done a good job of reducing our electricity usage by unplugging unnecessary devices and consolidating servers. However, I'm always looking to buy more gadgets :).

    year | total usage kWh | solar kWh | grid kWh | savings $ | avg monthly kWh | % solar usage
    1    | 8636.35         | 5811.2    | 2825.15  | $1,234    | 719.7           | 67.3%
    2    | 7640.2          | 5739.2    | 1901     | $1,144    | 636.7           | 75.1%
    3    | 7410.95         | 5548.95   | 1862     | $1,107    | 617.6           | 74.9%


    Total output from the panels has decreased each year, but that's been mainly due to a larger number of rainy or overcast days. I'm sure some of it is due to dust collecting on the panels. I have yet to get up on the roof to wash them off, and it's something I plan on doing this spring. The wet winter we've had this year has done a decent job of washing the panels for me. Overall, we're very pleased with our solar panels, which have already paid for 28% of the cost (slightly less than I projected due to us using less electricity).

    Thursday, March 28, 2013

    Tasker $1.99 at Google Play

    Tasker, an app for automating almost anything on your Android device, is on sale at the moment for $1.99 (regularly $6.49). I was interested in buying it a while ago, but $6.49 for an app seemed a bit high. Glad I waited. Get your copy at Google Play.

    Automated Barking Dog Correction

    Our Australian Shepherd is a wonderful dog, but he's a bit protective of our property. When he's out in the backyard, he will bark when he senses anything unfamiliar to him in front of our house. He tends to bark near the gate, so I've wired a microphone, placed in a garage vent near the gate, back to a server. On that server, we've been using our Noise app to detect when he's barking. It generates an xPL message when the detected sound level exceeds a specified threshold. There are a few conditions that are checked before the system decides he's barking, like whether the gate and garage door are closed and there's motion detected near the gate. The server will then play recorded MP3s of our voices telling him to be quiet through a speaker placed near the same garage vent. It worked for a while, but our dog got used to it and started ignoring it. The next step was to purchase a cheap windshield washer pump from Amazon (see below). The pump was connected to a gallon juice jug filled with water, and some plastic tubing was connected to the other end of the pump, routed through the garage vent and aimed at the gate. Finally, the pump was soldered to a 12V wall wart connected to an appliance module. Now, in addition to the verbal correction, the pump gets turned on for 3 seconds, shooting a stream of water at the area behind the gate. He hates getting wet, so it's no surprise that the frequency and duration of barking has drastically declined :)

    Follow up: Barking has declined from 3-5 times per day to maybe once a week!
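The correction logic boils down to a few guard conditions followed by the two corrections. Here's a minimal sketch; the function, the device callbacks and the threshold value are all illustrative, and the real system reacts to the xPL message from Noise rather than being called directly:

```python
import time

BARK_THRESHOLD = 70  # illustrative sound level; the real units/threshold depend on Noise

def handle_noise_event(level, gate_closed, garage_closed, motion_at_gate,
                       play_mp3, set_pump, squirt_secs=3):
    """Decide whether a loud sound is really the dog barking, and correct it (sketch)."""
    if level < BARK_THRESHOLD:
        return False  # not loud enough to be barking
    if not (gate_closed and garage_closed and motion_at_gate):
        return False  # he can't actually be barking at the gate right now
    play_mp3("quiet.mp3")    # verbal correction through the garage-vent speaker
    set_pump(True)           # appliance module on -> washer pump squirts
    time.sleep(squirt_secs)  # 3-second stream of water by default
    set_pump(False)
    return True
```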



    Monday, March 18, 2013

    It's Alive...Alive!!! (Our Infocast/Chumby that is)

    Since the Chumby servers shut down (and were switched to a stub service), the only available app is a clock. Some time ago, a Chumby user, Zurk, created an offline "firmware" for Chumbies to run without the Chumby servers in case they went down. He also released an app called octopus that downloaded all the apps off the Chumby servers, which was the thing to do when the servers were still up. With the downloaded apps and the "firmware" (more like some scripts and a local collection of Chumby files than actual firmware), Chumby nirvana could mostly be restored. I really only care about one app, Panel Builder 2, which runs on all our Infocasts. That rarely changes unless one of the kids wants to use some other app. I didn't even notice the Chumby servers went down because our Infocasts were still running the cached PB2. Slowly, one by one, our Infocasts got reset to the default stub server clock. When my nightstand Infocast switched over and I couldn't control the house from it, that was the last straw! I had to get this Zurk thing working. It was pretty much plug & play - just unzip the contents to a big enough USB disk. Panel Builder 2 was a private app for my testing only, so it couldn't have been grabbed by octopus. Since I wrote it, I just needed to stick the original .swf file on the USB drive and edit a few files to add the app, and the Infocasts are useful again :) I've slowly been tweaking things and testing some other apps, but everything's back to normal. One caveat: it looks like apps that need configuration will not work. With PB2, I just hardcoded my server address and compiled a new .swf.

    Friday, March 8, 2013

    xpllib the CPU sucker

    Recently, I had time to reinstall the OS on our media server, which was acting a little sluggish. I upgraded the RAM from 2GB to the max 4GB and installed Windows 7 x64 instead of the 32-bit that was on there. With the extra headroom, I migrated some xPL apps from my HA server to this one, but what I discovered was surprising. The media server has a relatively modern Pentium dual-core E5200, yet periodic bursts of xPL traffic would spike the CPU utilization up to near 100%. Three apps of mine (xPLGMail and 2 instances of xPLGVoice) would each suck up about 25% of the CPU. This didn't happen when those apps were on the E6420 Core2 HA server, but I did see something like this years ago when I was putting some xPL apps on HP T5700 thin clients. The culprit was the xpllib dll. The CPU-intensive one is version 4.4.3663.31835, but some other apps I wrote using version 4.3.2737.14049 rarely use 1% of the E5200 CPU. I ended up taking a step back and recompiling xPLGMail and xPLGVoice with 4.3. Just like that, those apps never used more than 1% of the CPU. This reminded me of some years ago, when one of the more recent xPL devs started building a whole new xpllib (V5). At that time, I was working on the T5700s (still in use), so there's no way I would use an even fatter, more CPU-intensive xpllib. (V5 at 320KB is almost 8 times bigger than 4.3 at 44KB.) Not only that, the new V5 is not backwards compatible. I wasn't going to rewrite 20+ apps to "move forward." In fact, it looks like I will be migrating all my xPL apps backwards from xpllib 4.4 to 4.3. There's no need to waste CPU cycles for equivalent functionality. I'm slowly getting back to HA and it feels good.

    Tuesday, February 26, 2013

    Where Have I Been and Where is Chumby?

    I've been here...working at another startup. It's been about 18 months and things really got hectic around December 2011 and haven't let up since. All my HA work has been put on hold, with minor tweaks here and there. It's been my longest time away from HA since I started all this back around 1994, but I've done so much and everything just works. It's quite satisfying. Things are slowly easing up at work and I'm finding a little time and interest to do some more things, but it'll be a while before I'm able to commit as much time as before. One thing that really bums me out is that the Chumby service has finally wound down. There's hope that it may come back, but for now there's just a clock widget. At some point, I will probably set up my own local server to serve widgets to our Infocasts, but there's no time for that right now. Chumby RIP for now.