Monday, January 13, 2014

HA Revamp Progress

It took about 3 days to get the W800 code migrated over from starCOMUltra and debugged. That's now sitting on an Edgeport 4 port serial to USB adapter connected to the BeagleBone Black, along with 1Wire, UPB and the RFID reader. The last major interface to write was for the JDS TimeCommander+, which I started on January 1. I completed most of it by Friday the 3rd, so I was able to shut off starCOMUltra and let my engine run overnight all by itself for the first time. Since I had to go back to work the following week, I probably spent 15 hours a day working on it that last weekend. It's been over a week now and I'm still cleaning up code, making the Python interface to the TC+ more reliable and fixing some events that didn't get correctly ported over from starCOMUltra. Last night, I put in the final piece of TC+ support by adding control of the IR Xpander. The next step will be porting this engine over to x64. After that, hopefully I can start migrating everything to the new server.

Tuesday, December 31, 2013

Saying Goodbye to 2013 and 32-bit Windows

2013 is almost over and it really wasn't a very prolific year for me as far as HA was concerned. Work has been overwhelming this year and last, but I got some sporadic cycles to do some HA. I am, however, finishing the year with a flourish as I have some time off.

Just before the holidays, I picked up a new Haswell based i3 server to eventually replace our aging HA server. I installed Windows 7 x64 on it, but moving forward from 32-bit XP means leaving things behind due to compatibility. It's something I've dreaded having to do, but it's also a chance to rid myself of some of these Windows dependencies, as much as possible. The WinEVM software used to program our JDS TimeCommander+ (TC+) will get left behind, and starCOMUltra (SCU), which is THE heart of our HA system, won't be coming along either. I don't want the hassle of running XP in a VM just for a couple of programs. While I've got an XP based netbook to run WinEVM and program the TC+, there's nearly 15 years of HA scripting tied into SCU that has to be migrated.

I have my own app, xPL SCU, a scripting engine bridging SCU and xAP/xPL where about 1/4 of my HA code resides and where I'll be moving most of the nearly 10,000 lines of code I've written in SCU. I already have variations of xPL SCU running on x64 in the form of my other scripting apps, xScript and xPLScript (basically the scripting engine minus the SCU piece). To leave SCU behind, I'll hopefully be able to code my own interface to the TimeCommander+; when that happens, I'll remove those references from xPL SCU and everything should just go. We'll see what roadblocks I bump into.

In the meantime, since I built all the interfaces to other hardware into the SCU script, I have to rebuild that somewhere else. As I mentioned in my previous post, I moved ZWave and 1Wire to a BeagleBone Black, and I'm continuing in that direction. I'm putting all the serial devices on the BBB and will use xPL to communicate with the HA server - to keep their interfacing code independent of whatever HA software I use now or in the future. I already prototyped this path with the ZWave interface, and since I can run the same Python code on the Windows server and the BBB (just changing the serial port reference), I can debug it without moving the hardware to the BBB.
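
The port selection is the only platform-specific piece, which is what makes that debug-on-Windows, deploy-on-BBB flow work. Here's a minimal sketch of the idea (the port names are placeholders, not my actual assignments, and the real serial I/O would be done with something like pyserial):

```python
import sys

# Pick the serial port based on the host platform so the same interface
# script runs unmodified on the Windows server and the BeagleBone Black.
# Both port names below are illustrative placeholders.
def serial_port_for_platform(platform=None):
    platform = platform or sys.platform
    if platform.startswith("win"):
        return "COM4"          # Edgeport-assigned port on the Windows server
    return "/dev/ttyUSB0"      # USB-serial adapter on the BBB

# The rest of the script is identical on both hosts, e.g. with pyserial:
#   import serial
#   ser = serial.Serial(serial_port_for_platform(), 9600, timeout=1)
```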

As a result, I've made some good progress in a short time. Three days ago, I wrote a Python script to interface xPL and the RFID reader, put that on the BBB, and moved the related logic from SCU to xPL SCU. The next day, I did the same with UPB. Yesterday, I started doing this for the WGL 800. Because I have so much stuff triggering off of X10 wireless (DS10As, MS10As, etc.), debugging the migrated code is taking a while. In addition, there's a lot of cleanup to do. Lots to keep me busy well into 2014.

Happy New Year.



Tuesday, November 5, 2013

Time Flies

Has it really been almost four months since my last post? Things have been incredibly busy at work, and I haven't had much time to blog since summer. We had a major liquidity event recently, and both the build-up to it and the aftermath have brought an increased focus on the product I'm working on. Needless to say, the stupid crazy work schedule for a grossly understaffed project has been ramped up again. While it's great to have a cash reward for 2 1/2 years of manic, startup work, what I really want is time off.

I have been able to do some things here and there. One much needed housekeeping chore I did was cleaning up a bunch of wiring around the house and garage and putting all the exterior cameras and motions on a single 12V power supply (so they can easily be put on a UPS). Another project I've been working on is gradually cleaning up the jobs that run on my HA server, mainly in preparation to build a new one. It's a 6.5 year old Core2 Duo system running XP. Eventually I want to put in a new i3 or i5 server.

As part of that cleanup, I started testing putting some hardware on one of the BeagleBone Blacks I bought. I put the Aeon Labs ZStick on the BBB, installed OpenZWave and added xPL functionality to their Python sample code. After some testing, I moved ZWave permanently over from the HA server to the BBB and turned off my .NET app. Another thing I moved was 1Wire. I installed DigiTemp on the BBB to read the DS9097U 1Wire network and wrote more Python to dump the temperature data to mySQL. With that, I turned off another of my .NET apps.
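
The glue for the 1Wire piece is simple: run DigiTemp, pull the sensor number and temperatures out of each output line, and insert them into mySQL. A rough sketch of the parsing side, assuming DigiTemp's default output format (table and column names in the comment are made up for illustration):

```python
import re

# DigiTemp's default output looks roughly like:
#   Jul 12 06:00:01 Sensor 0 C: 21.50 F: 70.70
LINE_RE = re.compile(r'Sensor (\d+) C: ([-\d.]+) F: ([-\d.]+)')

def parse_digitemp(line):
    """Return (sensor, temp_c, temp_f), or None if the line doesn't match."""
    m = LINE_RE.search(line)
    if not m:
        return None
    return int(m.group(1)), float(m.group(2)), float(m.group(3))

# The insert step would then be something like (MySQLdb assumed,
# table/column names illustrative):
#   cur.execute("insert into temps (sensor, temp_c) values (%s, %s)", row[:2])
```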

Is there a pattern here? Possibly. I have gotten tired of .NET apps running on XP but not on Windows 7 or breaking after a Windows update. The ever growing, bloated Visual Studio tools are another negative. I don't think I can get away completely from .NET, but I can reduce some of the dependencies I have by coding in Python. Of course, coding in Python means a certain Premise user on CocoonTech (who doesn't understand blogs, thinks they're hard to read and should be organized by topic instead of date) will have an easy time stripping the headers and credits from my code and passing it off as his own.

I also funded this Arduino project on Indiegogo, getting 5 boards: 2 for the kids and 3 for future projects. I haven't had much time to work on them, other than powering up and downloading sample code to make sure they work. Recently, I've been searching eBay for various sensors for these boards to play around with. Borderless Electronics has now come back with a follow-up campaign featuring various combinations of a kit, board and shield. I'm contributing again to get some shields for my Arduinos so they can be network enabled.

While on the topic of crowdfunded projects, I also backed the Pressy project. This is going to be a great way to kick things off on my Note II, like triggering my voice recognition home automation script.



Friday, July 12, 2013

What I Automate

This is something I've been meaning to write down, but never got around to. Let's see what I can remember:

Lighting:
  • Usual lighting control - control from switch, keypads, PCs, mobile devices
  • Night lighting - all rooms except the bedrooms have motion activated night lighting, which is overridden by light sensors
  • Back lighting - turning on backyard lighting when the back door deadbolt is unlocked and turning off when it's locked
  • Protocols - X10, UPB and ZWave

    Notification methods:
  • Whole house text-to-speech (TTS) announcements - for announcing weather, reminders, warnings, etc during normal waking hours
  • Whole house sound effects - different notification sounds for the mailbox being opened, doorbell ring (changes with the season), motion on the porch, etc.
  • Zoned TTS announcements - Insignia Infocasts and SqueezeBoxes in bedrooms and living areas for targeted notifications
  • On screen displays (OSD) - Infocasts, PCs and TVs also display announcements in case they can't be heard (watching a movie, blasting the stereo)
  • IM - notifications are sent via IM when away from home and Internet connected
  • Email - when attachments are part of the notification
  • SMS - Google Voice sent SMS for when we're out and not Internet connected
  • multiple - depending on the severity of the notification, multiple methods may be combined. OSD & TTS typically happen for most notifications when someone's home
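
To make that last bullet concrete, here's a toy sketch of how severity plus presence could pick the delivery methods (the severity levels and method names here are illustrative, not my actual code):

```python
# Hypothetical mapping from notification severity to delivery methods.
METHODS_BY_SEVERITY = {
    "info":     ["osd", "tts"],
    "warning":  ["osd", "tts", "im"],
    "critical": ["osd", "tts", "im", "sms"],
}

def notify_methods(severity, someone_home, phone_online):
    methods = list(METHODS_BY_SEVERITY.get(severity, ["osd"]))
    if not someone_home:
        # nobody home to see the OSD or hear the TTS
        methods = [m for m in methods if m not in ("osd", "tts")]
        if not phone_online:
            # no data connection: drop IM, fall back to SMS via Google Voice
            methods = [m for m in methods if m != "im"]
            if "sms" not in methods:
                methods.append("sms")
    return methods
```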

    Environment:
  • HVAC - disable HVAC when alarm armed in away mode or when a window is opened, warn if someone's trying to turn it on when a window is left open via TTS, OSD, IM
  • Fans - 1Wire temperature sensors in each bedroom control when to turn off any fans running on summer nights
  • Exhaust fans - turn on/off laundry room & bathroom fan based on humidity, guestbath fan based on duration of occupancy ;)
  • Leaks - wetness sensors under sinks & in bathrooms and alerts sent via all notification methods

    Info/Status notifications:
  • Temperature & weather for the day - TTS announcement in the morning as we come downstairs
  • Google calendar appointments for the day - same as above. as calendar events hit their reminder time, notifications are sent via IM and OSD.
  • Phone calls while we're out are announced as we enter the family room
  • Mailbox - OSD & sound on event. TTS reminder later to check it if we haven't gotten the mail.
  • Doorbell - OSD & sound effect on event. TTS to tell us the doorbell rang while we were out
  • Temperature - sound effect when it is cool enough to open the windows on days over 80°F
  • Washer finished - TTS, OSD, IM
  • Garage door left open with no activity for a period of time - TTS, OSD, IM, SMS
  • Front door left unlocked for a period of time - TTS, OSD, IM, SMS

    Telephony:
  • Google Voice forwarding - when arriving home, Google Voice forwarding for the home phone is enabled, when leaving it's disabled. arriving & leaving work also changes GV forwarding to my work phone.
  • Caller ID - incoming calls are checked against all our Google contacts, and if it's a match, the person's name (number if no match) is displayed & announced via TTS, OSD and IM.
  • Intercom - wired phones and softphones for calling specific rooms from other house/softphones
  • Mute/Pause AV - when the main phone is picked up.

    AV:
  • TV - automatically turn on when DVD player is turned on (but not off since we may watch something else after). game consoles are connected to one tv that is turned on & off when the consoles are.
  • Whole house audio - via Squeezeboxes
  • Uptime - all AV equipment in the house are monitored by current sensors so their actual power on status is known. uptime, last on time, and last off time are tracked.

    Dog:
  • Feeding - automated feeding 1 hour after our morning run, when I've already left for work. when the feeder is used, I get an SMS confirmation and a picture of the happy eater
  • Outdoor bark deterrent - when barking is detected by the side gate, verbal corrections are played by the garage speaker and a stream of water is shot at the gate area.

    Nanny:
  • Kids' laptops - when on, the kids are reminded every 20 minutes to get up walk around and look away from the screen
  • Game consoles - when on, the same 20 minute reminder is announced (and the TV is muted to reinforce the message)

    Cars:
  • RFID - tracking home & away status of cars - TTS, OSD, IM announcement when a vehicle arrives home

    Irrigation:
  • Run time - adjusted based on rain, weather forecast & past temperatures
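
The adjustment logic is simple in spirit: skip watering around rain and scale the run time with recent temperatures. A toy sketch with made-up thresholds and factors (not my actual numbers):

```python
# Hypothetical irrigation run-time adjustment; thresholds are illustrative.
def adjusted_runtime(base_minutes, rained_recently, forecast_rain, avg_high_f):
    if rained_recently or forecast_rain:
        return 0                  # skip watering entirely around rain
    factor = 1.0
    if avg_high_f > 90:
        factor = 1.25             # hotter stretch: water longer
    elif avg_high_f < 65:
        factor = 0.75             # cooler stretch: water less
    return int(base_minutes * factor)
```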

    Security:
  • CCTV + DVR - typical motion based recording of cameras
  • Away notification - Email snapshots of particular events (doorbell, mailbox, porch activity, etc) when we're out
  • Triggered - all sorts of notifications

    Power monitoring:
  • Real-time - whole house power monitoring split into 7 zones
  • Tracking - local via rrdtool & canvas.js, cloud via automatic updating to Google Drive spreadsheet
  • Oven - warning via TTS, SMS if arming house in away mode and the oven is on
  • Solar production comparables - in progress, comparing current production of array to past to determine if cleaning is necessary

    Security:
  • Locks - deadbolt state monitoring
  • Reminder - warning if we unlock a door and the alarm is still armed in home mode (so we don't accidentally set off the alarm and wake up the neighborhood)
  • Away lighting - replay lighting events from some randomly chosen log file
  • AV - turn off AV equipment when armed in away mode

    Computers:
  • Backups - typical automated local and remote backups
  • Monitoring - cpu utilization & temperature, hard drive utilization & temperature, uptime, low disk & memory notifications, automatic killing of runaway processes on servers
  • Internet - bandwidth & connection monitoring, auto power cycle modem & router if connectivity is lost
  • Power - automatic hibernation of certain powered on machines when alarm is armed in away
  • Wake from hibernation - automatic waking of my laptop from hibernation upon first motion in the master bathroom in the morning or arrival home of my cellphone
  • Email - TTS, OSD notification of new emails
  • Craigslist bot - periodically scans CL for things I'm looking for (found a $10 and $20 Squeezebox this way!)

    Occupancy/Presence:
  • Motion sensors - some hardwired, many wireless all over the house & around the house
  • Audio - sound detectors upstairs and in the living room
  • Video - one camera connected to a hacked Seagate Dockstar is being used for a specific motion detection purpose
  • Cellphone - via bluetooth, mentioned above
  • Car - via RFID, mentioned above

    Control methods:
  • Android speech recognition - using our phones & tablets (and SL4A+Python) to voice control lighting, AV equipment and various appliances, query status of devices and control audio players and music selection
  • IM - using the same language parsing as the speech recognition, everything that's voice controllable can be controlled via IM
  • AJAX Floorplan GUI - control & view status of light/appliances/AV/HVAC from cellphones, tablets, PCs - basically anything with a modern browser. see status of doors/windows/locks/temperature
  • Insignia Infocast (Chumby) - used as touchscreen control panels with our Panel Builder app and "offline firmware" (since Chumby service is essentially dead)
  • Wireless keypads - X10RF & ZWave based keypads
  • IR - all Squeezeboxes are IR receivers and can broadcast the IR codes they receive via the xPL plugin

    2nd Home:
  • Minimal automation - lighting, occupancy, cellphone presence, leak sensors
  • Remote link - all status is updated to the main home via IM and lighting can be controlled over IM

    Work:
  • Cellphone tracking - running on my office laptop, IMs my presence to the HA server when my cellphone is around, to adjust Google Voice forwarding

    Old stuff (no longer in use):
  • Baby monitor - broadcast through whole house speaker system and local Shoutcast channel (for listening on a PocketPC PDA when I was out in the yard - yes it's been a long time since the kids were babies), automatically turns on when drop side crib (now banned) door raised and off when someone enters the room (and back on after a period of inactivity if the crib door is still raised)

    This is just a quick list. I'm sure I'll add to it when I have more time... I need to annotate this with some pics and links too (in progress)
    This is a screen cap of the main screen for our Insignia Infocasts (aka Chumby) that we have around the house


    Saturday, June 22, 2013

    Washed the Solar Panels

    I've been meaning to wash the panels for a while now, but a Google study I read said the gains were minimal. They've been installed for over three years and there's a good layer of dust and some bird poop on them. I'm sure if I cleaned them, I'd get some increased production. In June three years ago, we would occasionally get over 25 kWH per day. Now, we are seeing just over 22 kWH. That's a pretty bad drop, almost 12%! Panel output naturally degrades over its lifetime, but this exceeds what I should see (the panels are guaranteed to produce at least 90% of their wattage for 10 years and at least 80% for the 15 years after). I've been wasting money all this time! I finally got around to getting a telescoping car wash brush:


    The brush extends to 75", but since the panels are on a 2nd story roof, I can only reach the first 8 panels without creeping near the edge of the roof. I ended up zip tying the brush to an extension pole we used for painting, and I was easily able to reach to the end of the 2nd row of panels. At 6AM this morning, I climbed up on the roof with the brush, the pole and the hose and washed away. In 15 minutes, I was done as well as I could be. The brush is very soft and there still seemed to be some dirt I wasn't able to scrub away, but it is a lot cleaner. Today will be cloudless and mild, like yesterday and the day before, so I'll be able to make a good comparison on the effects of the cleaning. So far, at 9AM, the panels are producing 2245 W and total power produced is 3.033 kWH. Compared to yesterday at 9AM, the panels were delivering 1965 W and had produced 2.574 kWH. Instantaneous output has gone up 14% and total production is up 17%. Pretty good! We'll see where the day's production ends up, and I'll post updates below. It certainly looks like I'll be making periodic trips onto the roof to wash the panels.

    Time | Today W | Yesterday W | % Change | Today kWH | Yesterday kWH | % Change
    9AM  |    2245 |        1965 |   +14.2% |     3.033 |         2.574 |   +17.8%
    1PM  |    2973 |        2726 |    +9.1% |    14.264 |        12.861 |   +10.9%
    4PM  |    1913 |        1673 |   +14.3% |    21.924 |        19.826 |   +10.6%


    Update: At the end of the day, 24.5 kWH of electricity were produced versus 21.99 kWH yesterday, an 11.4% increase. That's pretty significant. It's as if I added 2 more solar panels to the 16 we already have. The question is, when should I get back up there and clean them again? One interesting bit from the above table is that dirty panels are actually less productive in the morning and evening. In other words, the grime on the panels obstructs more sunlight when the sun is not directly above them. I'm sure there's some high school physics explanation behind that, but I'll let someone else figure it out :)

    Monday, June 17, 2013

    Major Milestones Along The Way



    I was cleaning up some stuff in my office and came across the invoice for my JDS TimeCommanderPlus (TC+), and it made me think about how our HA system got where it is today. September 1995 - that's when the madness began. Although I was playing with some wireless X10 stuff a little earlier, the real automation started with the TC+. To this day, it's still chugging away, handling X10 and IR (with an IR-Xpander2), along with some digital inputs and a few relays. It was great, for a time. The TC+ native software package, WinEVM, was practically a Windows 3 program and never evolved beyond that. Not being a software guy, I was content to use the WinEVM point-and-clunk interface to write code (later dubbed "starglish" in reference to the follow-on JDS Stargate) to download to the TC+. It had hooks to allow it to play sound files on a serial connected host PC plus the ability to issue shell commands. I created a bunch of batch files to take advantage of that, but not much else.

    A major leap forward came when I started beta testing a software package called starCOMPlus in 2001 (or was it 2000? I can't remember). starCOMPlus exposed all the devices in the TC+ through DCOM, allowing users to create standalone apps that could control and query all aspects of the TC+. It also gave the ability to create web interfaces, and I learned how to build ASP pages to control X10, relays and IR and read the digital and analog inputs. starCOMPlus also introduced what the developer called a "hosted" script, which also had access to the TC+ devices and would run while the app ran. It allowed offloading and expanding of functionality from the TC+ to the host computer using a "real" language (jScript or VBScript - I chose jScript). From there, the number of things I could automate and interface to exploded.

    In 2003, I stumbled upon xPLRioNet, an alternate server for the Rio Receiver (RR). The RR was one of the first networked MP3 players, and I had picked up a couple being liquidated around 2001. xPLRioNet (later called MediaNet), was the best of a few alternate RR servers. It featured this neat thing called xPL, that allowed status and control messages to be passed around my home LAN. As I found out, there were many xPL apps that expanded HA beyond what I had known. I could now have distributed nodes tied together by xPL. I was hooked.

    Four years later, I figured out how to write an AJAX app, and put together the pieces of what became our floorplan GUI. During that process, I taught myself PHP, JavaScript, DOM manipulation and especially mySQL, which has become an integral part of our HA system. With xPL providing the first piece of a distributed HA system, mySQL became the persistent state of the system, accessible and changeable by any of the nodes. Now, I was no longer tied to having Windows nodes.

    In early 2008, I started learning to write my first .NET app. I had been using a copy of HAL Deluxe that I had found on liquidation for about $10, but the user interface was a piece of crap. It was another point-and-clunker, so my first app exposed HAL's devices to a scripting engine (much like what starCOMPlus did for the TC+) and tied in xPL. While I was writing that app, I was also using that knowledge to write another app linking starCOMUltra (the sequel to starCOMPlus) to xPL & xAP, and building in another scripting engine. To that point, I had been using other apps to script xPL interactions: xPLHAL and xAP Floorplan. Neither provided the free-form scripting that starCOM* did and that I craved, so I wrote my own. Once I got my feet wet in .NET, I was churning out applications like crazy & the functionality of our system took another exponential jump.

    It has been nearly non-stop HA since then (as you can see from all the blog entries), except for the work induced hiatus last year. As always, I'm on the lookout for new ideas to implement (although I'm notoriously frugal!).

    Sunday, June 9, 2013

    Selecting Music to Play on a Squeezebox Using Android Speech Recognition

    This is a short clip showing two examples of using Android speech recognition to select music to play on a Squeezebox (actually SliMP3). It shows how the server side maintains context of the current operation. I ask it to play 30 Seconds to Mars, and the server queries the Logitech Music Server database for their albums. It returns them and my phone asks which album I want to listen to. I reply "This is War" and then am asked where I want the music played ("What zone?"), to which I answer guestroom. The server then queues up the album and launches it in the guestroom, turning on the SliMP3 and the powered speakers. In the second example, I tell it a particular album (Ride the Lightning) and location in one sentence. The requested album is launched in the guestroom.
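
The server keeps a small per-conversation state so each reply is interpreted in context: artist first, then album, then zone. A stripped-down sketch of that state machine (class and method names are invented for illustration; the real code also queries the Logitech Music Server and launches the playback):

```python
# Toy dialog context for the artist -> album -> zone flow; names are made up.
class MusicDialog:
    def __init__(self):
        self.artist = self.album = self.zone = None
        self.albums = []

    def handle(self, text, albums_for):
        """Process one recognized phrase; albums_for(artist) returns album names."""
        if self.artist is None:
            self.artist = text
            self.albums = albums_for(text)
            return "Which album? " + ", ".join(self.albums)
        if self.album is None and text in self.albums:
            self.album = text
            return "What zone?"
        # anything else at this point is treated as the zone (toy behavior)
        self.zone = text
        return "Playing %s in the %s" % (self.album, self.zone)
```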

    On a related note, here's sample SL4A code that passes recognized speech to an IM address and speaks the responses received. It's the basis of what runs on my phone.


    Saturday, June 8, 2013

    New Toy: BeagleBone Black

    It's been a while since I got a new toy and I've been eyeing a Raspberry Pi, for no other reason than everyone seems to have one. A few months ago, I read about the BeagleBone Black (BBB), a revised version of the BeagleBone with more power and a much cheaper price of $45, barely more than a Pi. Sold! I just had to find a place that had them in stock. I eventually found a site, Special Computing, that had the BBBs for $43 each (has since gone back to MSRP) with just $3 first class USPS shipping for 2 (I always tend to buy these type of gadgets in pairs for some reason - Quatech serial servers, Rio Receivers, 3Com Audreys, Insignia Infocasts, Seagate Dockstars...) I ordered them Sunday night, they shipped out Monday from Arizona and arrived at my office in Silicon Valley on Wednesday. It came too fast! I usually have time to do a bit of research to plot out what I'm going to do with my new toy before it arrives.

    Last night, I finally had some spare cycles and got to work. I knew I didn't want to use the Angstrom Linux that comes pre-installed on the BBB - I wanted something with access to the most recent Linux packages. I figured I'd go for Ubuntu. I wanted to see how a desktop would run on it anyway. I went with a pre-built Ubuntu 13.04 image and followed these directions to install it on an 8GB microSD card I had lying around using my wife's Linux laptop. (Yes, I have my wife, and 10 and 12 year olds, running Ubuntu on their laptops instead of Windows!) Next thing was to get the BBB booting off the uSD card. Apparently, there's a button on the BBB you push to force it to boot from the uSD instead of the onboard eMMC, but that's not going to work for unattended use. Instead, I found this method to easily get the BBB to automatically boot from the uSD card: connect the BBB to your computer like a USB drive, but instead of deleting the MLO file, I just renamed it MLO.save - in case I want to boot from the eMMC in the future.

    After disconnecting the BBB from my laptop, I put the uSD in it, powered it up and booted Ubuntu from the card. Shellinabox comes up by default, allowing me to log into the BBB in a browser window. SSH didn't come up, so I enabled that with "sudo update-rc.d ssh defaults" and went through the process of adding a user ("sudo adduser ...") and adding that user to the sudoers file. Then I set up a light window manager ("/bin/bash /boot/uboot/tools/ubuntu/minimal_lxde_desktop.sh"), installed VNC ("sudo apt-get install vnc4server") and set up VNC to use lxde. I also wanted bluetooth support ("sudo apt-get install bluez") and needed Python pexpect for a project I want to do (Python was already installed in the Ubuntu image). Pexpect came as a .deb file, so I needed to install dpkg ("sudo apt-get install dpkg") to be able to install pexpect ("sudo dpkg -i python-pexpect_2.4-1_all.deb").

    That's how far I got last night. I was able to VNC in, see the lxde window manager come up, launch Chromium and log into GMail in slow motion. Web browsing seems a bit much for this platform. I have yet to hook it up to a monitor (I need to get a mini HDMI to HDMI cable). My immediate use will be to experiment with the TI SensorTags. The tags use bluetooth low energy (BLE), which is built into Linux kernels 3.5 and higher.


    Tuesday, May 28, 2013

    Speech Recognition of Numbers for Timed Events

    One of the knocks against using speech recognition for home automation is that you could have used a different method, like a touchscreen or remote, for more efficient control. However, timed events are one of those use cases that I find better with voice control. Let's say I want to turn on a light for 20 minutes. With a GUI, I have to select the light and then select a mode (delayed - turn on after X seconds, interval - turn on for X minutes then turn off). To select the delay/duration, I would need some sliders, text boxes to type the time or maybe a drop down list of predefined times. If I want to schedule with days or dates, then I would need a date picker/calendar as well. Or, I could just say "At 6PM on Sunday, turn on the porch light for 15 minutes."

    Continuing with my recent experiments with Android speech recognition, I began adding timed events. One advantage of a free-form speech recognition engine like Google's is the ability to recognize any number that's spoken. You're not limited to a set of predefined options, like with Homeseer:
    <1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|30|40|50|60|70|80|90> <seconds|second|minutes|minute|hours|hour|days|day>
    
    Kind of a nitpick, but what if you want 25 minutes? Can't do it, but Google will recognize whatever you say, whether it's 3 fortnights or 2,567 seconds. It may not be necessary, but it gives you the flexibility to do whatever you want. It's up to your software to parse out the numbers and units. With Python, it's simple to scan the sentence for a particular pattern and extract the necessary parameters. The following code shows how to extract information for basic delayed/duration type of events.
    regex_delay = re.compile(r'(?:^|\s)(in|for|after)\s+(\d+)\s+(day|hour|minute|second)s?')
    if regex_delay.search(msg):
      delay_parm = re.findall(regex_delay, msg)
    
    delay_parm will now contain a list of groups. If your command is "turn off the garage light after 15 seconds", you'll get this:
    delay_parm = [('after', '15', 'second')]
    delay_parm[0][0] = after  # delay type
    delay_parm[0][1] = 15     # delay value
    delay_parm[0][2] = second # delay unit
    
    Now you have all the information you need to perform the action:
    # convert to a common unit, seconds
    if delay_parm[0][2] == "day":
      delay_time = int(delay_parm[0][1]) * 60 * 60 * 24
    elif delay_parm[0][2] == "hour":
      delay_time = int(delay_parm[0][1]) * 60 * 60
    elif delay_parm[0][2] == "minute":
      delay_time = int(delay_parm[0][1]) * 60
    else:
      delay_time = int(delay_parm[0][1])
    
    if delay_parm[0][0] == "for":
      do_something_for(delay_time)
    else:
      do_something_delayed(delay_time)
    
    Scheduling an event based on a day ("next Tuesday"), a time ("at 3PM") or date ("December 31, 2014") is just an extension of this. Take a look at this demo where I'm showing a time based reminder and a delayed lighting event.
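
As an example of that extension, a matched clock time like "at 3PM" just needs to be converted into a delay in seconds relative to now (a sketch; the helper name is made up):

```python
from datetime import datetime, timedelta

# Hypothetical helper: turn an "at 3PM"-style match into a delay in seconds.
def seconds_until(hour, ampm, now=None):
    now = now or datetime.now()
    hour = hour % 12 + (12 if ampm.lower() == "pm" else 0)  # 12-hour -> 24-hour
    target = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)   # that time already passed today
    return int((target - now).total_seconds())
```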


    Saturday, May 18, 2013

    Generic Speech Recognition Script for SL4A

    I've finally had some time to make a generic speech recognition script that hopefully any SL4A capable Android device can use. I've taken parts of my script from the previous post and added in some sample pattern matching from my server side script. The result is a script that can issue commands to your home automation controller or software by fetching URLs. A couple of prerequisites: you must have SL4A and Python for Android installed on your device. It would help to be familiar with some Python and its regular expression syntax. The code is well commented and has samples for recognizing phrases like "turn off the kitchen light" and "turn the master lights off" - so hopefully that's enough to kickstart your automating. So go ahead and get it!

    Saturday, May 11, 2013

    More Two Way Interaction With Android Speech Recognition

    Let's start with the demo first. The video shows commands and queries being spoken and recognized by Android speech recognition. Our web GUI is also in the shot - since the wife forbids me to walk around filming the insides of our house for the world to see :) - so you can at least "see" some of the status being queried and the results of some actions. There are some annotations on the video, but you can't see them on the embedded player. Click through to YouTube to see the video with annotations.



    What makes all that work is that the queries are passed to a server for processing. That opens the door to two-way interactions where you can not only control your system but also query it. As I mentioned in my previous post on this topic, using IM as a transport mechanism allows the recognized phrase to be sent to the server and the responses sent back to the Android device. Over on our server, EVERY device and its state is logged in our MySQL database. This was done when we built our AJAX based GUI. Also, since our system is distributed, MySQL provides a place for status to be updated and synced between various devices. Below is a snapshot of a phpMyAdmin page showing part of one of our database tables. The table contains the device name, type, its state and when it was turned on and off.



    Every device and its state is stored: every light, appliance, AV device, motion sensor, door, window, lock, car, phone, computer, etc. Whenever a device's state changes, a function gets triggered in whatever software is interfacing to that device (which is, for the most part, Windows scripting like the JScript below):

    function setStatusOnOff(device,type,state,secs) {
        try {
            if (state=="off") {
                mysqlrs.Open("insert into status (device,type,state,secs_off) values ('"+device+"','"+type+"','"+state+"','"+secs+"') on duplicate key update state='"+state+"', secs_off='"+secs+"'",mysql);
            } else {
                mysqlrs.Open("insert into status (device,type,state,secs) values ('"+device+"','"+type+"','"+state+"','"+secs+"') on duplicate key update state='"+state+"', secs='"+secs+"'",mysql);
            }
    ...
    }
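The same insert-or-update pattern carries over to Python-based interfaces. Here's a minimal, self-contained sketch of the idea - note the assumptions: it uses the standard-library sqlite3 module as a stand-in for MySQL (SQLite spells the upsert `ON CONFLICT ... DO UPDATE` instead of `ON DUPLICATE KEY UPDATE`), the table and column names just mirror the JScript snippet above, and parameterized queries replace the string concatenation to avoid quoting problems:

```python
import sqlite3

# In-memory database standing in for the MySQL status table
db = sqlite3.connect(":memory:")
db.execute("""
    create table status (
        device   text primary key,
        type     text,
        state    text,
        secs     integer default 0,
        secs_off integer default 0
    )
""")

def set_status_on_off(device, type_, state, secs):
    """Insert or update a device's state; 'off' updates secs_off, anything else updates secs."""
    col = "secs_off" if state == "off" else "secs"
    db.execute(
        f"insert into status (device, type, state, {col}) values (?, ?, ?, ?) "
        f"on conflict(device) do update set state=excluded.state, {col}=excluded.{col}",
        (device, type_, state, secs),
    )

set_status_on_off("family room tv", "av", "on", 1368300000)
set_status_on_off("family room tv", "av", "off", 1368303600)
row = db.execute(
    "select state, secs_off from status where device='family room tv'"
).fetchone()
print(row)  # ('off', 1368303600)
```

The second call hits the primary-key conflict and updates the existing row in place, exactly like the `on duplicate key update` clause above.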
    

    Since the device name is stored as a normal non-abbreviated name ("family room tv" instead of "frtv"), it's straightforward to use the recognized speech to search for devices using MySQL queries. The next step is to figure out what type of command is being issued. For example, a command will have the phrase "turn on" or "turn off" in it. Since I use Python on the server to process the speech, I use its regular expression (regex) functions to pattern match for commands:

    reTurn = re.compile(r'(^|\s+)turn.*\s+o(n|(f[f]*))($|\s+)', re.I) # recognize "turn on", "turn this and that on", "turn this and that blah blah blah off" or just "turn off", even "turn of", anywhere in a sentence
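A quick self-contained check of that pattern against a few sample phrases (made-up phrases, not from the actual system) shows it catching commands anywhere in a sentence while ignoring a plain query:

```python
import re

# Matches "turn ... on/off" (and even the misrecognition "turn of") anywhere in a sentence
reTurn = re.compile(r'(^|\s+)turn.*\s+o(n|(f[f]*))($|\s+)', re.I)

# Prints True for the first three phrases and False for the last
for phrase in ("turn on the kitchen light",
               "turn the master lights off",
               "please turn the porch light of",   # dropped 'f' still matches
               "what is the thermostat set to"):
    print(bool(reTurn.search(phrase)))
```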
    

    After figuring out if it's a command or query, my script then strips out extraneous text to simplify extracting the device and type. What gets stripped out depends on how things are phrased in your household. Here's a snippet to do that, where msg is the recognized phrase:

    msg = re.sub(r'\b(turn|the|a|can|you|please|will)\b', ' ', msg, flags=re.I) # strip out unneeded words (whole words only)
    msg = re.sub(r'\s+', ' ', msg).strip() # collapse the leftover whitespace
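Here's a self-contained illustration of that stripping step (the filler-word list is just the one from this post; the word boundaries ensure a standalone "a" is removed but the "a" inside "lamp" is left alone):

```python
import re

def strip_filler(msg):
    # Remove filler words as whole words only, then collapse the leftover spaces
    msg = re.sub(r'\b(turn|the|a|can|you|please|will)\b', ' ', msg, flags=re.I)
    return re.sub(r'\s+', ' ', msg).strip()

print(strip_filler("can you please turn the kitchen light off"))  # kitchen light off
print(strip_filler("turn a lamp on"))                             # lamp on
```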
    

    I'm experimenting with natural language processing to strip out unnecessary words automatically but it's not ready yet. Next, the script figures out the type of device involved. For lighting, it would use a regex similar to this:

    reLight = re.compile(r'\b(light|lamp|chandelier|halogen|sconce)s?\b', re.I) # match whole lighting-related words, singular or plural
    

    Since all the extra words have been stripped out and the type has been determined, all that's left is to formulate a MySQL query like this to get the actual device name:

    msg = re.sub(r"\s+", "%", msg) # replace spaces with the wildcard character %
    if reLight.search(msg):
      query = "select * from lighting where device like '%" + msg + "%'"
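To make the wildcard trick concrete, here's a self-contained sketch (sqlite3 standing in for MySQL, with a made-up lighting table and a parameterized query): "guest bath light" becomes the LIKE pattern %guest%bath%light%, which still matches a device stored as "guestbath light".

```python
import re
import sqlite3

# Hypothetical lighting table; the real system's table lives in MySQL
db = sqlite3.connect(":memory:")
db.execute("create table lighting (device text primary key)")
db.executemany("insert into lighting values (?)",
               [("guestbath light",), ("kitchen light",), ("master lamp",)])

reLight = re.compile(r'\b(light|lamp|chandelier|halogen|sconce)s?\b', re.I)

def find_device(msg):
    """Turn a recognized phrase into a LIKE pattern and look up the real device name."""
    pattern = "%" + re.sub(r"\s+", "%", msg.strip()) + "%"  # spaces -> wildcards
    if reLight.search(msg):
        row = db.execute("select device from lighting where device like ?",
                         (pattern,)).fetchone()
        return row[0] if row else None

print(find_device("guest bath light"))  # guestbath light
```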
    

    This is necessary to remove ambiguities in the recognition. A light may be named "guestbath" in the HA system, but Google may pass the recognized phrase as "guest bath." With the actual device name, the final steps are to issue the command and send a response back to the Android device. As lights and other devices are added to the HA system, nothing else needs to be done: new device names are simply parsed out of the database, and no changes are required on the Android device. Contrast that with other automation systems where you have to set up a recognition phrase for every device, and possibly every state, in your system. Queries follow a similar flow, except instead of issuing a command, a response is formulated with the status and sent back to the user.

    That's the backend. I'll cover the frontend in another post.

    Thursday, May 9, 2013

    Using CanvasJS to Graph Power Consumption

    We've been using RRDtool for graphing everything from temperatures, to disk usage, to power consumption. It's very powerful and makes some nice charts, but I can never remember how to set up the database. Plus, my server is constantly running the tool to regenerate the graphs every 15 minutes so they're relatively up to date when someone views them. I'm now playing with CanvasJS, which uses HTML5 and JavaScript to easily generate some really cool graphs. I'm using power consumption as my test bed for implementing CanvasJS. Data for the power consumption is dumped into our MySQL database every 2 minutes (it's actually coming in every second, but I'm only sampling the data every 2 minutes for this graphing application). With some JavaScript and PHP pulling the data out of MySQL, the charts are generated on the fly. It works REALLY well and it's fast. You can pan and zoom the chart to see the exact power usage at a specific time. Check out the gallery for more samples with code. I will probably transition all of our system's graphing over to CanvasJS after I have more time to experiment. In the meantime, here's a short video showing the power consumption graphs I'm working with.
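The 2-minute sampling itself is simple. Here's a sketch of the idea in Python (a hypothetical in-memory reading list, not the actual MySQL logging code): keep the first per-second reading in each 2-minute bucket and drop the rest.

```python
# Downsample per-second power readings to one sample per 2-minute bucket
# (a sketch of the idea; the real system logs the kept samples to MySQL)
INTERVAL = 120  # seconds per bucket

def downsample(readings):
    """readings: iterable of (unix_timestamp, watts); keep the first reading per bucket."""
    kept, last_bucket = [], None
    for ts, watts in readings:
        bucket = ts // INTERVAL
        if bucket != last_bucket:
            kept.append((ts, watts))
            last_bucket = bucket
    return kept

samples = [(t, 500 + t % 7) for t in range(0, 600)]  # 10 minutes of fake per-second data
print(len(downsample(samples)))  # 5
```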