A Magic Mirror Powered by AIY Projects and the Raspberry Pi

Machine Learning imprisoned behind a sheet of glass

Alasdair Allan
21 min read · Oct 31, 2017

This is the first post in a series of three building a simple voice-controlled Magic Mirror. This first post shows how to put the mirror together, while the second post in the series looks at how to use Machine Learning locally on the device to do custom hotword recognition without using the cloud. The final post in the series looks at integrating the Voice and Vision Kits together to build a mirror that recognises when you're looking at it.

Back at the end of August, just ahead of the pre-order availability of the new Google AIY Project Voice Kit, I finally decided to take the kit I’d managed to pick up with issue 57 of the MagPi out of its box and put it together.

I’ve been playing around with the Voice Kit ever since. Inspired by the 1986 Google Pi Intercom build put together by Martin Mander, I even built my own retro-computing enclosure around the Voice Kit using an iconic GPO 746 Rotary Telephone.

However with the new Voice Kits arriving on shelves at Micro Center over the weekend I’ve been thinking about other possible projects, builds that I always wanted to do but never got around to, and the one that immediately sprang to mind was a Magic Mirror. If only because the Voice Kit seems like the way to control a mirror that really does feel like ‘magic.’

The completed Magic Mirror along with my previous Google AIY Project Voice Kit builds.

A lot of Magic Mirror builds are complicated: they use a full-sized LCD flat panel monitor and require a lot of woodworking skills. Considering my limited woodworking skills I was after something a bit more modest; best to start small. A small mirror, and a smaller screen to go with it.

Since I had a spare Raspberry Pi 7-inch touch screen sitting on the shelf in my lab I decided to build the mirror around that. Now 7 inches is probably a little too small; that’s perhaps more of a shaving mirror than a magic mirror. But I figured I could embed it in one corner of the frame of a larger mirror, and then use it for status messages and other notifications.

Gathering your Tools and Materials

This project involves more woodworking than electronics. You’ll need a small Phillips “00” watchmaker’s screwdriver, a craft knife, scissors, a set of small wire snips, black electrical tape, a saw, a mitre block, a speed square, and possibly some Sugru and a spudger if you’re feeling ambitious.

Getting your hands on Two-Way Acrylic

While you might have pretty much everything else to hand, unless you’re in an interesting line of work you won’t have any two-way acrylic. However it’s actually pretty easy to get hold of: you can order it in custom sizes online in the United States and in the United Kingdom, and it’ll arrive in a few days pre-cut and ready. If you’re unsure about your carpentry skills, put your enclosure together before ordering the two-way acrylic; then you can order the exact size you need to fit your enclosure.

Building the Enclosure

If you’re not entirely sure what you’re doing it’s often easier to start with something pre-built, and then modify it to your own purposes, than it is to start entirely from scratch.

The lip of the box picture frame.

In this case I decided to grab a simple box picture frame from a local DIY store and start from there. The box frame I picked is designed to hold a 30cm×30cm square picture. The frame has clips around the edge to hold in a front perspex sheet and a hardboard backing sheet behind that. The picture gets sandwiched between the two.

Deepening the frame with additional wood.

The frame also has a lip to hold in the perspex sheet, perfect in fact to hold a similarly sized sheet of two-way acrylic.

Unfortunately while the box frame is quite deep, it’s not going to be deep enough to hold all the electronics.

Cutting the wood pieces used to extend the frame.

While we could shave some height off our electronics, and if we wanted a really slender build we could even desolder things like the Ethernet and USB jacks on the Raspberry Pi, that seems like an awful lot of trouble for a prototype.

Perhaps if I like the build I’ll settle down and do a much lower profile one in the future. However for now the easiest thing to do is to deepen the frame by adding additional wood.

Fixing and gluing.

Fortunately I had some 25mm × 15mm stock on hand from a previous project which fitted neatly around the edge of the existing box frame.

Using a mitre block to get nice clean 45° angles I cut the frame pieces to length, then fixed and glued them in place. While the glue seemed to be strong enough, I also tacked the frame together to add additional strength.

Painting with wood stain.

I also made sure to leave a cut out on one of the sides so that I could thread power (and other) cables through cleanly.

Afterwards I stained the fresh wood using a black wood stain to match the existing box frame.

I also cut a backing 32cm×32cm plywood sheet using a laser cutter and stained it with the same black wood stain.

The backing plywood sheet will be used to close up the build once everything is inside. Not strictly necessary, but it makes things neater. If you want to do likewise, but don’t have access to a laser cutter, you can now actually order pre-sized and cut plywood on Amazon if you don’t want to source a sheet and cut it to size yourself.

Leaving the frame to dry overnight, we move on to the guts of the build.

Opening the Box

Ahead of the new Voice Kit hitting the shelves at the weekend I managed to get my hands on a few pre-production kits. The new AIY Voice Kit comes in a box very similar to the original kit distributed with the MagPi magazine. The box might be a bit thinner, but otherwise things look much the same.

The new AIY Project Voice Kit.

The only component swap was the arcade button: gone are the separate lamp, holder, microswitch, and button, all four replaced by a single button with everything integrated. Since it was somewhat fiddly to get that assembled the first time around, this is an improvement.

But other than that, things went together much as before.

While my pre-production kits didn’t include it, the retail version should have a copy of the “MagPi Essentials AIY Projects” book written by Lucy Hattersley on how to use the Voice Kit with your Raspberry Pi.

Preparing the 7-inch Display

Embedded behind glass, our 7-inch display really doesn’t need its digitizer panel. So I decided to remove it, and very carefully pried the digitizer away from the display panel with a spudger. I then used black electrical tape to cover the metal edges; this prevents scratching when we put it up against the mirror, but more importantly it helps reduce the amount of reflected light behind the mirror.

The official Raspberry Pi 7-inch display, without the front digitizer panel.

If you don’t feel confident about removing the digitizer panel, or quite frankly think it’s all too much work, you don’t have to do it.

Cable to the digitizer disconnected from the main LCD panel.

Instead you can just detach the cable from the display board to the digitizer and leave it hanging. Everything will work just fine, but the display won’t be a touch screen any more.

Putting Everything Together

We’ve reached the point where everything starts to come together quickly. Grab your two-way acrylic panel and insert it into your box frame. Then take your display panel and tape it onto the inside of the mirror.

Two-way acrylic is reflective, and looks more or less like a normal mirror, from the well-lit side, and is see-through from the side with no light.

However things close to the ‘back’ side of the mirror are visible through the reflection simply because of scattered light. Unlike a lot of magic mirror builds our screen doesn’t cover the entire mirror, so we have to black out the rest of it.

The back side of the mirror, with the 7-inch display taped in place at bottom left and a blackout sheet above it.

To do this I used some black plastic sheeting I had on hand, and it was as simple as taping it in place with a slight overlap on to the back of the display assembly. If you don’t have plastic sheeting you could probably use black cardboard. But bear in mind things inside the mirror might well get hot, using a black fabric or other materials with a low ignition temperature might not be the best idea.

All of the blackout shielding in place and Raspberry Pi connected to the display.

Once all the blackout material is in place you can start adding the electronics behind the plastic backing. It should be thick enough so that reflected light—and direct light from the LEDs on the Raspberry Pi—won’t show through to the front of the mirror.

All of the electronics taped in place inside the mirror.

Once everything is in place with tape we can close up the mirror using our plywood backing panel. Although you might want to think about drilling some ventilation holes in the back panel, and fitting a small passive heatsink to the Raspberry Pi, before you do that. You might also want to hold off on that final step until you’ve got network access to the Raspberry Pi.

I’ve talked through how to configure the AIY Projects Voice Kit installation without a monitor in a previous piece. However unless you’ve sealed it up already, we still have access to the back of the mirror, so you could just plug a spare keyboard and mouse into the back and work directly.

The completed mirror booted into Raspbian.

Once you’ve got it connected to the network go ahead and test the audio and cloud connectivity using the desktop icons. If everything works we’re ready to push on, and if you’re happy with how things look you might want to think about replacing some of that electrical tape with some Sugru, or otherwise fixing things in place a bit more permanently before closing the mirror up for good with some tacks.
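If you’d rather test from a terminal over SSH instead of the desktop icons, the icons just wrap the check scripts that ship with the kit. Something along these lines should work, although the directory and script names here are from memory and may differ in your release of the kit software, so list the contents of the AIY-voice-kit-python directory first,

$ cd ~/AIY-voice-kit-python
$ python3 checkpoints/check_audio.py
$ python3 checkpoints/check_cloud.py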

Connecting to the Cloud

We can go ahead and set up Google Cloud Platform as we’ve done before on other Voice Kit builds using the console.cloud.google.com developer console. However for this build, as well as the Google Assistant API, we’ll also be using the Google Cloud Speech API.

Unfortunately for those of us based in the European Union, the Cloud Speech API is not available unless you’re signed up to the Google Cloud Platform as a business. This isn’t a technical issue, it’s a legal one, so it’s not one you can work around at this point.

But don’t despair if that means you can’t use the Cloud Speech API: just ignore what follows and skip ahead. I’ll talk about alternative ways around this later.

Assuming you can make use of it, you can use the API Library to find the Cloud Speech API and enable it for your project.

Enabling Google Cloud Speech API

However unlike the Google Assistant API which is free to use, at least until you’ve reached the (fairly generous) daily quota, the Google Cloud Speech API is not free.

Google Cloud Speech is not free!

You’ll need to enable billing to support it in your project.

If you’ve never set up billing before you’ll need to do that now.

However adding billing to your Google Cloud Platform developer account is actually pretty easy, and on signing up for billing for the first time Google will give you $300 in credit to spend over the next 12 months. Which will at least let you sit down and test your mirror before deciding whether you want to use it.

You’ll need to create a payment profile and provide some credit card details.

Picking up $300 of free services for signing up for billing.

Once you’ve created a billing account you can go ahead and enable Google Cloud Speech API for your project.

Even once your $300 of credit is used up, you won’t be automatically billed.

Once it’s enabled we need to go ahead and create a Service Account Key for the Cloud Speech API. Click on the ‘Credentials’ tab on the right hand side of your screen.

The Google Cloud Speech API is now enabled.

Then in the ‘Create credentials’ drop down select ‘Service account key.’

Creating a Service Account key.

This takes you to the next page where you can create the Service Account.

Creating a Service Account key.

Fill in the project details and click ‘Create.’ A popup window will then appear with your credentials; don’t panic when this disappears, as this isn’t your only chance to grab them. Dismissing the popup by clicking “OK” leaves you in a credentials list with your newly generated credentials, which you can then go ahead and download to your device.

Find the JSON file you just downloaded, it’ll be named magicmirror-XXXXXXXXXXXX.json. Rename this file to cloud_speech.json, then move it to /home/pi/cloud_speech.json.
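From a terminal on the Pi that’s a one-liner. The ~/Downloads path here is an assumption about where your browser saved the file, so adjust it if your downloads land somewhere else,

$ mv ~/Downloads/magicmirror-XXXXXXXXXXXX.json /home/pi/cloud_speech.json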

Installing the Magic Mirror

Now that we have everything in place to work with the Voice Assistant and Cloud Speech APIs, let’s take a step back and look at the Magic Mirror software we’re going to use in the build.

MagicMirror² is an open source modular smart mirror platform. Originally put together by Michael Teeuw, and covered in issue 54 of the MagPi, the project now has an active community around it, a number of off-the-shelf third-party modules that you can install, and solid developer documentation.

Installation is as simple as opening a Terminal window on your Raspberry Pi’s desktop and then downloading and running the installation script.

$ bash -c "$(curl -sL https://raw.githubusercontent.com/MichMich/MagicMirror/master/installers/raspberry.sh)"

After installation go into the MagicMirror/config directory and edit the config.js script to remove most of the modules and open up access control to all network interfaces and IP addresses. Don’t worry, we’ll fix that huge security hole after we’re done configuring everything, but for now it just makes our life a little bit easier.

var config = {
    address: "0.0.0.0",
    port: 8080,
    ipWhitelist: [],
    language: "en",
    timeFormat: 24,
    units: "metric",

    modules: [
        {
            module: "clock",
            position: "bottom_left"
        },
    ]
};

/*************** DO NOT EDIT THE LINE BELOW ***************/
if (typeof module !== "undefined") {module.exports = config;}

Save the cut down config.js file and restart the mirror software,

$ cd MagicMirror
$ npm start

and you should see a clock ticking away on the bottom left of our mirror.

First run for the Magic Mirror with just the Clock module configured and active.

We’re getting there, and now that our mirror is working we need a way to remotely control it so we can manipulate what’s on the screen using the Voice Kit. Fortunately there’s a third party module that lets us do just that.

Adding Remote Control

Hit Ctrl-Q to close the Magic Mirror and go ahead and install the Remote Control third-party module directly from its Git repo.

$ cd ~/MagicMirror/modules
$ git clone https://github.com/Jopyth/MMM-Remote-Control.git
$ cd MMM-Remote-Control
$ npm install

After installation you’ll need to edit the MagicMirror/config/config.js to add the module,

{
    module: 'MMM-Remote-Control',
    position: 'bottom_right'
},

before restarting the Magic Mirror.

$ cd ~/MagicMirror
$ npm start

After restarting the mirror, the endpoint for the Remote Control module is displayed in the lower right of the screen.

My magic mirror with its new remote control.

Since we went ahead and disabled access control by IP in the config file you should be able to connect to the mirror and access the remote interface by going to http://192.168.xxx.xxx:8080/remote.html.

However the Remote Control module doesn’t just add a graphical interface; it adds HTTP GET endpoints that allow us to perform a number of actions, including hiding and showing other modules. We’re going to use this to make our Magic Mirror, well, magic, and show us, rather than tell us, about the weather.

Adding Weather

To make use of the Current Weather and Weather Forecast modules we need to go ahead and sign up for an OpenWeatherMap account. Signing up for an account is free, and will automatically create the API key we need to use the weather modules included with MagicMirror².

After signing up for OpenWeatherMap an API Key is created by default.

Once we’ve got an API key we can add both the Current Weather and Weather Forecast modules to our config.js file,

{
    module: "currentweather",
    position: "top_right",
    config: {
        location: "CITY,COUNTRY",
        locationID: "XXXXXX",
        appid: "YOUR_API_KEY"
    }
},
{
    module: "weatherforecast",
    position: "top_right",
    config: {
        location: "CITY,COUNTRY",
        locationID: "XXXXXX",
        appid: "YOUR_API_KEY"
    }
},

Replace YOUR_API_KEY with your OpenWeatherMap API key, and CITY,COUNTRY with your location. Your locationID string can be found in the OpenWeatherMap City List.

Since I’m based in Exeter in the United Kingdom I need to substitute the following for the location and locationID strings,

{
    module: "currentweather",
    position: "top_right",
    config: {
        location: "Exeter, GB",
        locationID: "2649808",
        appid: "YOUR_API_KEY"
    }
},

as well as adding my own personal OpenWeatherMap API Key.
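Before restarting the mirror it’s worth checking that your key and location ID actually return data. The request below is pieced together from the apiBase, apiVersion, and weatherEndpoint values the currentweather module itself uses (you can see them in the module data dump later in this post), so treat it as a quick sanity check,

$ curl "http://api.openweathermap.org/data/2.5/weather?id=2649808&appid=YOUR_API_KEY&units=metric"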

A Working Mirror

If you’ve configured everything correctly, restarting the mirror software should show you something like this,

The MagicMirror² running with the Clock, Weather, Forecast, and Remote modules on screen.

with the clock in the bottom left, the Remote Control URL in the bottom right, and the current and forecast weather in the upper right of our screen. At least for a 30cm×30cm square mirror that puts the weather roughly in the bottom middle of the magic mirror itself, so long as you mounted the display in the lower left as I did here.

Your config file should now look a lot like this,

var config = {
    address: "0.0.0.0",
    port: 8080,
    ipWhitelist: [],
    language: "en",
    timeFormat: 24,
    units: "metric",
    modules: [
        {
            module: "clock",
            position: "bottom_left"
        },
        {
            module: "MMM-Remote-Control",
            position: "bottom_right"
        },
        {
            module: "currentweather",
            position: "top_right",
            config: {
                location: "Exeter, GB",
                locationID: "2649808",
                appid: "YOUR_API_KEY"
            }
        },
        {
            module: "weatherforecast",
            position: "top_right",
            config: {
                location: "Exeter, GB",
                locationID: "2649808",
                appid: "YOUR_API_KEY"
            }
        }
    ]
};
/*************** DO NOT EDIT THE LINE BELOW ***************/
if (typeof module !== "undefined") {module.exports = config;}

Using AIY Projects SDKs from SSH

Unfortunately now that we have our Magic Mirror running, we can’t get to the AIY Projects dev terminal. The easiest thing to do here is to SSH into our Raspberry Pi from our laptop and then type the following,

$ cd ~/AIY-voice-kit-python
$ source env/bin/activate

This will configure our SSH session in the same fashion as the dev terminal that we normally open by clicking on the desktop icon, which means we can now run AIY Projects scripts remotely on the device.

Using the Voice Assistant to Control our Mirror

Probably the easiest way to control the mirror is using the Google Voice Assistant, as we did for our retro-computing build using the GPO 746 rotary phone.

We can grab the module names, which will depend on the load order of the modules in your configuration file when the Magic Mirror starts, using the Remote Control module and the MODULE_DATA action. Go to the endpoint http://192.168.xxx.xxx:8080/remote?action=MODULE_DATA and you should see something a lot like this,

{
"moduleData": [
{
"hidden": false,
"lockStrings": [

],
"name": "clock",
"identifier": "module_0_clock",
"position": "bottom_left",
"config": {
"displayType": "digital",
"timeFormat": 24,
"displaySeconds": true,
"showPeriod": true,
"showPeriodUpper": false,
"clockBold": false,
"showDate": true,
"showWeek": false,
"dateFormat": "dddd, LL",
"analogSize": "200px",
"analogFace": "simple",
"analogPlacement": "bottom",
"analogShowDate": "top",
"secondsColor": "#888888",
"timezone": null
},
"path": "modules\/default\/clock\/"
},
{
"hidden": false,
"lockStrings": [

],
"name": "MMM-Remote-Control",
"identifier": "module_1_MMM-Remote-Control",
"position": "bottom_right",
"config": {

},
"path": "modules\/MMM-Remote-Control\/"
},
{
"hidden": false,
"lockStrings": [

],
"name": "currentweather",
"identifier": "module_2_currentweather",
"position": "top_right",
"config": {
"location": "Exeter, GB",
"locationID": "2649808",
"appid": "YOUR_API_KEY",
"units": "metric",
"updateInterval": 600000,
"animationSpeed": 1000,
"timeFormat": 24,
"showPeriod": true,
"showPeriodUpper": false,
"showWindDirection": true,
"showWindDirectionAsArrow": false,
"useBeaufort": true,
"lang": "en",
"showHumidity": false,
"degreeLabel": false,
"showIndoorTemperature": false,
"showIndoorHumidity": false,
"initialLoadDelay": 0,
"retryDelay": 2500,
"apiVersion": "2.5",
"apiBase": "http:\/\/api.openweathermap.org\/data\/",
"weatherEndpoint": "weather",
"appendLocationNameToHeader": true,
"calendarClass": "calendar",
"onlyTemp": false,
"roundTemp": false,
"iconTable": {
"01d": "wi-day-sunny",
"02d": "wi-day-cloudy",
"03d": "wi-cloudy",
"04d": "wi-cloudy-windy",
"09d": "wi-showers",
"10d": "wi-rain",
"11d": "wi-thunderstorm",
"13d": "wi-snow",
"50d": "wi-fog",
"01n": "wi-night-clear",
"02n": "wi-night-cloudy",
"03n": "wi-night-cloudy",
"04n": "wi-night-cloudy",
"09n": "wi-night-showers",
"10n": "wi-night-rain",
"11n": "wi-night-thunderstorm",
"13n": "wi-night-snow",
"50n": "wi-night-alt-cloudy-windy"
}
},
"path": "modules\/default\/currentweather\/"
},
{
"hidden": false,
"lockStrings": [

],
"name": "weatherforecast",
"identifier": "module_3_weatherforecast",
"position": "top_right",
"config": {
"location": "Exeter, GB",
"locationID": "2649808",
"appid": "YOUR_API_KEY",
"units": "metric",
"maxNumberOfDays": 56,
"showRainAmount": false,
"updateInterval": 600000,
"animationSpeed": 1000,
"timeFormat": 24,
"lang": "en",
"fade": true,
"fadePoint": 0.25,
"colored": false,
"scale": false,
"initialLoadDelay": 2500,
"retryDelay": 2500,
"apiVersion": "2.5",
"apiBase": "http:\/\/api.openweathermap.org\/data\/",
"forecastEndpoint": "forecast",
"appendLocationNameToHeader": true,
"calendarClass": "calendar",
"roundTemp": false,
"iconTable": {
"01d": "wi-day-sunny",
"02d": "wi-day-cloudy",
"03d": "wi-cloudy",
"04d": "wi-cloudy-windy",
"09d": "wi-showers",
"10d": "wi-rain",
"11d": "wi-thunderstorm",
"13d": "wi-snow",
"50d": "wi-fog",
"01n": "wi-night-clear",
"02n": "wi-night-cloudy",
"03n": "wi-night-cloudy",
"04n": "wi-night-cloudy",
"09n": "wi-night-showers",
"10n": "wi-night-rain",
"11n": "wi-night-thunderstorm",
"13n": "wi-night-snow",
"50n": "wi-night-alt-cloudy-windy"
}
},
"path": "modules\/default\/weatherforecast\/"
}
],
"brightness": 100,
"settingsVersion": 2
}

The information we’re after here is the module identifier tag which we can use along with the Remote Control module’s SHOW and HIDE actions to change how our modules are displayed.
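For instance, with the module identifiers from the dump above, you should be able to toggle the current weather display from any machine on the network (or from the Pi itself) with a couple of GET requests. These use the SHOW and HIDE actions provided by the Remote Control module, and the identifier here is simply the one my own config produced, so substitute your own,

$ curl "http://192.168.xxx.xxx:8080/remote?action=HIDE&module=module_2_currentweather"
$ curl "http://192.168.xxx.xxx:8080/remote?action=SHOW&module=module_2_currentweather"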

Here we have a simple script that uses the Voice Assistant API to show us our local weather information when we ask about it. Remember to make sure the Magic Mirror is running before SSHing in and running this script, otherwise the HTTP GET requests to the mirror’s Remote Control module will fail.
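The script itself isn’t reproduced here, but a minimal sketch of the approach looks something like the following. It leans on the google-assistant-library interface used by the kit’s assistant_library_demo.py example; the module identifiers are lifted from the MODULE_DATA dump above, while the trigger phrases, the ‘thank you’ dismissal, and the auth helper call are illustrative assumptions rather than details from my actual script,

#!/usr/bin/env python3
# Minimal sketch (not the exact script from the build): show the MagicMirror
# weather modules when the Assistant hears a weather question. The module
# identifiers below come from the MODULE_DATA dump earlier in the post.

import requests

import aiy.assistant.auth_helpers
from google.assistant.library import Assistant
from google.assistant.library.event import EventType

MIRROR = 'http://127.0.0.1:8080/remote'
WEATHER_MODULES = ['module_2_currentweather', 'module_3_weatherforecast']


def set_weather(visible):
    # Toggle the weather modules using the Remote Control module's SHOW/HIDE actions.
    action = 'SHOW' if visible else 'HIDE'
    for module in WEATHER_MODULES:
        requests.get(MIRROR, params={'action': action, 'module': module})


def process_event(assistant, event):
    # Fired once the Assistant has finished transcribing what was said.
    if event.type == EventType.ON_RECOGNIZING_SPEECH_FINISHED:
        text = event.args['text'].lower()
        if 'weather' in text:
            assistant.stop_conversation()  # keep the Assistant from answering out loud
            set_weather(True)
        elif 'thank you' in text:
            assistant.stop_conversation()
            set_weather(False)


def main():
    credentials = aiy.assistant.auth_helpers.get_assistant_credentials()
    with Assistant(credentials) as assistant:
        for event in assistant.start():
            process_event(assistant, event)


if __name__ == '__main__':
    main()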

Unfortunately, unlike our previous builds, there isn’t a button or dial to trigger our Voice Assistant, and while this build works just fine with the default hotword support in the Voice Assistant it’s not amazingly atmospheric. When talking to a Magic Mirror I don’t really want to say “Ok Google” to ask it to show me the weather.

Using the Cloud Speech API to Control our Mirror

Which is why we enabled the Google Cloud Speech API for this project earlier in the build. Unlike the Voice Assistant API, which has a hardwired hotword that is processed on the device, we can use the Cloud Speech API to send all our speech to the cloud and—thanks to a recent SDK update—listen for a custom hotword.

However, also unlike the Google Voice Assistant, which is a free service, the Cloud Speech API is priced monthly based on the amount of audio processed by the service, measured in increments rounded up to 15 seconds. The first 60 minutes of requests a month are free, with further requests costing $0.006 for each 15 seconds.

Having run my mirror for a few days, with network latency and other real world factors the cost comes out at roughly $0.017 per minute, which (over 60 minutes an hour, 24 hours a day) works out at just under $25 a day. Because of course using the API like this means that the mirror is listening for the hotword all the time.

The Cloud Speech API is really meant to be used with some sort of other trigger, like the button that is normally connected to the Voice HAT.

#!/usr/bin/env python3

import aiy.audio
import aiy.cloudspeech
import aiy.voicehat


def main():
    recognizer = aiy.cloudspeech.get_recognizer()
    recognizer.expect_phrase('repeat after me')
    button = aiy.voicehat.get_button()
    aiy.audio.get_recorder().start()

    while True:
        print('Press the button and speak')
        button.wait_for_press()
        print('Listening...')
        text = recognizer.recognize()
        if text is None:
            print('Sorry, I did not hear you.')
        else:
            print('You said "', text, '"')
            if 'repeat after me' in text:
                # Strip the trigger phrase and speak back whatever followed it.
                to_repeat = text.replace('repeat after me', '', 1)
                aiy.audio.say(to_repeat)


if __name__ == '__main__':
    main()

Using the Cloud Speech API to listen for a hotword isn’t really how it was meant to be used. It means that everything you say near the mirror needs to be sent to the cloud and processed to see if you’ve said the hotword, or phrase. For most people this will be a massive privacy concern. But more on this later, because we can fix things so they work a bit more reasonably.

For now however everything works as you’d expect, and our slightly more thematic hotword, or rather hot-‘phrase’, of “Magic Mirror on the Wall…” is recognised just fine.

The final build.

The final script for the build is really very simple, although to make use of it you need to make sure you’ve got the absolute latest version of the AIY Voice Kit software by pulling from the project’s Git repo.

$ cd AIY-voice-kit-python/
$ git pull

Then you can just copy the script using the Cloud Speech API into AIY-voice-kit-python/src and run it as normal.

If you want to replicate the build you can grab the mono WAV file I used for the ‘tinkle’ as the weather is displayed from Dropbox.
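That final script isn’t reproduced here either, but its shape follows directly from the cloudspeech demo above. The sketch below is a minimal guess at such a script: the tinkle.wav filename and path, the ten second display time, the module identifiers, and the aiy.audio.play_wave() playback call are all assumptions for illustration, so check the aiy.audio module in your copy of the SDK for the exact playback helper,

#!/usr/bin/env python3
# Minimal sketch of a hot-phrase driven mirror script (not the exact build script).
# Uses the same aiy.cloudspeech calls as the demo above; tinkle.wav, the sleep
# time, and the module identifiers are placeholder assumptions.

import time

import requests

import aiy.audio
import aiy.cloudspeech

MIRROR = 'http://127.0.0.1:8080/remote'
WEATHER_MODULES = ['module_2_currentweather', 'module_3_weatherforecast']
HOT_PHRASE = 'magic mirror on the wall'


def set_weather(visible):
    # Toggle the weather modules using the Remote Control module's SHOW/HIDE actions.
    action = 'SHOW' if visible else 'HIDE'
    for module in WEATHER_MODULES:
        requests.get(MIRROR, params={'action': action, 'module': module})


def main():
    recognizer = aiy.cloudspeech.get_recognizer()
    recognizer.expect_phrase(HOT_PHRASE)
    aiy.audio.get_recorder().start()
    set_weather(False)

    while True:
        print('Listening...')
        text = recognizer.recognize()
        if text and HOT_PHRASE in text.lower():
            aiy.audio.play_wave('/home/pi/tinkle.wav')  # placeholder filename
            set_weather(True)
            time.sleep(10)  # leave the weather on screen for a bit, then hide it again
            set_weather(False)


if __name__ == '__main__':
    main()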

Controlling the Screen Backlight

One of the things I found after using the mirror for a while is that the screen backlight makes a lot of difference to how seamless the mirror appears.

When in a brightly lit room you need to crank the screen backlight up to maximum, whilst in a dimmer room it’s best to leave the backlight set much lower. You can control the brightness of the backlight directly from the command line,

$ echo 255 | sudo tee /sys/class/backlight/rpi_backlight/brightness

where 255 is the panel’s maximum brightness.

However if you’re permanently installing your mirror you might want to take a look at the rpi-backlight Python package, which will let you dynamically control the backlight from your Python script. You can raise the level of the backlight when the mirror is triggered by the hotword, and then drop it again to something much lower and less obvious when the information is removed from the screen. I’ve found this helps enormously with immersion; however, the precise levels really depend on your local lighting.
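If you’d rather not pull in another dependency, the package is essentially a wrapper around the same sysfs file used above, so a tiny helper along these lines will do. This is just a sketch, and it assumes the script has permission to write to the brightness file, for example by running as root or after adding a suitable udev rule,

#!/usr/bin/env python3
# Tiny sketch of a backlight helper using the sysfs interface directly.
# Assumes the process is allowed to write to the brightness file.

BRIGHTNESS = '/sys/class/backlight/rpi_backlight/brightness'


def set_backlight(level):
    # level should be between 0 and 255, the panel's maximum brightness
    level = max(0, min(255, int(level)))
    with open(BRIGHTNESS, 'w') as f:
        f.write(str(level))


if __name__ == '__main__':
    set_backlight(255)   # full brightness while the mirror is showing something
    # set_backlight(40)  # and much dimmer when it's idle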

(Re-)Securing the Mirror

In the MagicMirror/config/config.js file we opened up network access to the mirror so we could easily test and configure it. We need to close those holes up. Since we’re now controlling the mirror entirely from localhost using our Python script, we can edit the config file to reflect that,

var config = {
    address: "localhost",
    port: 8080,
    ipWhitelist: ["127.0.0.1", "::ffff:127.0.0.1", "::1"],
.
.
.
};
/*************** DO NOT EDIT THE LINE BELOW ***************/
if (typeof module !== "undefined") {module.exports = config;}

and restrict access so that only connections from the Raspberry Pi itself, on the loopback interface, can take control of the mirror.

Adding more Modules

Even though I’m British, I’m not restricted to just talking about the weather. There’s a large number of third-party modules available for the MagicMirror² platform, ranging from modules displaying news, currency exchange rates, and stock prices, to monitoring your internet speed, something I’ve been rather concerned about in the past, and even with our small screen there are plenty of positioning regions available to display additional modules.

The Two Elephants in the Room

However there are two huge elephants in the room with our current approach to the magic mirror.

Firstly, money. The Cloud Speech API is not free. You can use the online calculator to work out roughly how much you’re going to spend running your project, and keep tabs on both your API usage, using the project’s API Dashboard, and current billing costs, using the Billing Console. You can see the cost breakdown on a per-project basis by clicking through on each project listed in the “Projects linked to this billing account” tab near the bottom of the console. But at around $25 a day, running the mirror all the time probably isn’t sustainable.

The API Dashboard showing the API usage over the last few days as I put the project together.

But for most people the second elephant is a lot larger and a lot more scary, and that’s privacy. Ignoring the cost, streaming everything in the room to the cloud all the time and searching through it for a hotword or phrase is probably going to be unacceptable to most people.

However in the next post in this series we’ll take a look at replacing the rather expensive Cloud Speech API with something else, because we can actually use TensorFlow on our device—the Raspberry Pi—to look for and recognise a custom hotword entirely locally without talking to the cloud at all.

We can use our on-device hotword in the same way as we’re currently using our cloud-detected one, or in the past the Voice HAT button, but our calls to the Cloud Speech API will be restricted to just the bits and pieces of speech immediately following the hotword being spoken.

That’s going to hugely reduce our costs, and protect our privacy. It’ll also solve the baby elephant problem, and let those of you inside the EU that didn’t sign up for Google Cloud Platform as a business use custom hotwords.

Now on Shelves

The new kits are being produced by Google, and arrived on shelves at Micro Center at the weekend. The AIY Voice Kit is priced at $25 on its own, but you can pick one up for free if you order a Raspberry Pi 3 at $35 for in-store pickup from Micro Center.

The completed mirror and other Google AIY Project Voice Kit projects.

The kit is also available through resellers like Adafruit and SeeedStudio, and will be available in the United Kingdom through Pimoroni, costing £25.

If you like this build I’ve also written other posts on building a retro-rotary phone Voice Assistant with the Raspberry Pi and the AIY Projects Voice Kit, and a face-tracking cyborg dinosaur called “Do-you-think-he-saurs” with the Raspberry Pi and the AIY Projects Vision Kit.

This post was sponsored by Google.
