Automated regression testing of Chromecast using Cucumber, JRuby and Rukuli

As you may know, i work as a Developer In Test for Media Playout at the BBC. We enable audio and video playout on the web (desktop, tablet and mobile) using our Standard Media Player.

We recently added support for Chromecast, and i want to write a bit about how i automated testing for it.

Chromecast HDMI dongle

Chromecast is an HDMI dongle that plugs into a normal television and adds connected TV functionality. The neat thing is you can control it from your laptop, tablet or mobile using the media apps that you already know, such as YouTube, Netflix and now BBC iPlayer. Google also provides an API so that new Chromecast apps can be written all the time.

So how do you test a Chromecast? Well, personally i like to test things from a user’s point of view, actually clicking things and observing what happens as a result. But there’s only so far you can take this. I don’t want to rely on a television with a Chromecast plugged in, always on the same network, just so that i can talk to it.

Though i’ve got to say .. it would be quite awesome to use a television 2 metres wide for my automated regression testing! :p

About to start casting

I didn’t need to test the Chromecast receiver app; that had already been written and tested by the guys in Salford for use by the iOS iPlayer app. I realised that what we actually care about is the communication between our player and the Chromecast. And there is a well-documented Chromecast API. So with the help of colleagues Tim Hewitt and Wyell Hanna, i set about creating a fake Chromecast: pretty much a bit of javascript that would respond in the same way as a real Chromecast, so that we could test how our player would respond to it.

And now i present to you .. the fake Chromecast!

https://gist.github.com/sermoa/10988494

A lot of it is empty methods that are necessary to the API. But there are some neat tricks here too. I’ll talk you through a few of the more interesting bits.

Firstly we need to load the fake Chromecast into the page. We can achieve this with a @chromecast tag and a Before hook.

Before('@chromecast') do
  $chromecast = Chromecast.new
end

My Chromecast class has an initialize method that inserts the javascript into the page.

class Chromecast

  def initialize
    $capybara.execute_script <<-EOF
      var el=document.createElement("script");
      el.type="text/javascript";
      el.src = "https://gist.githubusercontent.com/sermoa/10988494/raw/82e08c5a29b5689b5e9f3d03c191b8c981102d85/fakeChromecast.js";
      document.getElementsByTagName("head")[0].appendChild(el);
    EOF
    @media_alive = true
  end

end

With this in place, so long as i set up our player with the relevant business logic that says casting is available, when i hover over the player, i get a cast button!

  @chromecast
  Scenario: Chromecast button appears
    Given Chromecast is available
    When I play media
    Then I should see the Chromecast button

A cast button appears!

So how does that work? I assure you, i don’t have a Chromecast on the network right now. I’m not using the Google Chrome extension. I’m actually running it in Firefox! What is this voodoo?!!

Have a look at the fakeChromecast.js file:

window.chrome = {};
window.chrome.cast = {};
window.chrome.cast.isAvailable = true;

See that? We’ve set up a Chromecast and said it’s available! Sneaky, hey? It’s that easy, folks! :)

The Standard Media Player will now attempt to set up a session request with its application ID. That’s fine: we’ll happily enable it to do its thing.

window.chrome.cast.SessionRequest = function(a) { }

Obviously a real Chromecast does something rather significant here. But we don’t have to care. This is a fake Chromecast. We only provide the function so that it doesn’t break.

Next, the media player makes a new ApiConfig. It’ll pass the session request it just obtained (in our case it’s null, but that doesn’t matter), and two callbacks, the second being a receiver listener. That’s the important one. We want to convince it that a Chromecast is available, so we trigger this callback with the special string “available”.

window.chrome.cast.ApiConfig = function(a, b, c) {
  c("available");
}

So Chromecast is available. Now suppose the user clicks the button to begin casting. This should request a session.

  @chromecast
  Scenario: Click Chromecast button and connect to a session
    Given I am playing media
    When I click the Chromecast button
    Then I should see Chromecast is connecting

Connecting to Chromecast

How did we do this? Easy! The media player requests a session, sending a success callback. The fake Chromecast stores a reference to that callback – on the window, so that we can trigger it any time we like! The callback function is expected to provide a session. We send it a reference to a fake session, which is entirely within our control. Oh it’s all so much fun!

window.chrome.cast.requestSession = function(a, b) {
  window.triggerConnectingToCC = a;
  window.triggerConnectingToCC(window.fakeSession);
}

As the documentation says, “The Session object also has a Receiver which you can use to display the friendly name of the receiver.” We want to do exactly that. We decided to call our fake Chromecast Dave. Because Dave is a friendly name! :)

window.fakeSession = {};
window.fakeSession.receiver = {friendlyName:"dave", volume:{level:0.7}};

I think i found that our app expected the session receiver to supply a volume too, so i added that.

The media player does some shenanigans with media that we don’t need to care about, but when it sends a request to load media it passes its callback to trigger when media is discovered by Chromecast. That’s another important one for us to keep, so we store that one. We wait 1 second for a semi-realistic connection time, and then trigger it, passing a reference to .. fake media, woo!

window.fakeSession.loadMedia = function(a, b, c) {
  window.pretendMediaDiscovered = b;
  setTimeout(function() {
    window.pretendMediaDiscovered(window.fakeMedia);
  }, 1000);
}

And now we are almost there. The last clever trick is the communication of status updates. The media player sends us a callback it wants triggered when there is a media status update. So we store that.

window.fakeMedia.addUpdateListener = function(a) {
  window.updateCCmediaStatus = a;
}

The magic of this is, we stored the references to fakeMedia and fakeSession on the browser’s window object, as well as the callbacks. This means the test script has access to them. Therefore the test script can control the fake Chromecast.

So you want Chromecast to report playing? Make it so!

  @chromecast
  Scenario: Begin casting
    Given I am playing media
    When I click the Chromecast button
    And Chromecast state is "PLAYING"
    Then I should see Chromecast is casting

We are casting!

What does that mean, Chromecast state is “PLAYING”? It’s pretty straightforward now, using the objects and callback functions that the fake Chromecast has set up:

When(/^Chromecast state is "(.*?)"$/) do |state|
  $chromecast.state = state
  $chromecast.update!
end

Those two methods get added to the Chromecast Ruby class:

class Chromecast

  def initialize
    $capybara.execute_script <<-EOF
      var el=document.createElement("script");
      el.type="text/javascript";
      el.src = "https://gist.githubusercontent.com/sermoa/10988494/raw/82e08c5a29b5689b5e9f3d03c191b8c981102d85/fakeChromecast.js";
      document.getElementsByTagName("head")[0].appendChild(el);
    EOF
    @media_alive = true
  end

  def state=(state)
    @media_alive = false if state == 'IDLE'
    $capybara.execute_script "window.fakeMedia.playerState = '#{state}';"
  end

  def update!
    $capybara.execute_script "window.updateCCmediaStatus(#{@media_alive});"
  end

end

Note that i had to do something special for the “IDLE” case because the media status update expects a true/false to indicate whether the media is alive.

So, using these techniques, sending and capturing events and observing reactions, i was able to verify all the two-way communication between the Standard Media Player and Chromecast. I verified playing and pausing, seeking, changing volume, turning subtitles on and off, ending casting, and loading new media.
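
To give a flavour of the ‘capturing events’ side, here is a sketch of what a pause check could look like. This particular step isn’t part of the real suite: the pauseCalled flag, and the assumption that $capybara exposes Capybara’s evaluate_script as well as execute_script, are made up purely for illustration.

Then(/^the player should ask Chromecast to pause$/) do
  # Illustration only: assumes fakeChromecast.js stubs fakeMedia.pause() and
  # records the call in a hypothetical window.fakeMedia.pauseCalled flag.
  paused = $capybara.evaluate_script("window.fakeMedia && window.fakeMedia.pauseCalled")
  raise "Expected the player to send a pause command to the fake Chromecast" unless paused
end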

This fake Chromecast is a lot faster than a real Chromecast, which has to make connections to real media. It’s also possible to put the fake Chromecast into situations that are very difficult to reproduce with a real one. The “IDLE” state, for example, is really hard to reach. But we need to know what would happen if it occurs, and by testing it with a fake Chromecast i was able to identify a bug that would likely have gone unnoticed otherwise.

For the curious, here’s a demo of my Chromecast automated regression test in action!

This is all a lot of fun, but there is nothing quite like testing for real with an actual Chromecast plugged into a massive television!

Testing Chromecast for real

So that’s how i test Chromecast. If you want to test Chromecast too, please feel free to copy and tweak my fakeChromecast.js script, and ask me questions if you need any help.

A typing tutor for the visually impaired

I have just had a very interesting conversation with Kajarii where we discussed keyboard ergonomics, alternative keyboard layouts, and typing tutors.

Kajarii is blind and touch types on a QWERTY keyboard, having learned by trial and error from a young age. Kajarii would like to learn the Colemak keyboard layout, but made a really good point: how do you learn a new keyboard layout when you can’t even see what the layout should look like and you have no feedback from what you’ve typed?

Once i thought about it, i realised that every typing tutor i’ve ever seen relies heavily on being able to see. You have to look at what you’re meant to type, there is usually some kind of layout diagram that highlights the next key, and colour is often used as feedback when you have typed correctly and the cursor moves on to the next letter. They are no good at all for somebody without the privilege of sight.

I think i could quite easily teach a blind person to touch type if we sat down side by side. I’d start with the index fingers on the home row, tell them what the letters are, give instructions of short bursts of letters to type, and give verbal feedback on what they typed. Like most typing tutors do, i’d gradually introduce new letters and form words to type using the letters learned so far. I can see this working well with one-on-one human interaction.

So can that be translated to an accessible typing tutor program? I think it probably could be. I would like to try. What would it be like? A terminal based app? A web app? A native desktop app? Would it rely on screen reader technology, or use pre-recorded instructions? I need the input of visually impaired people to know what would be most useful.

Is there anyone out there who would like to help write an accessible typing tutor? Anybody who would like to help test one?

Could we do an open source effort? Something that could be extended with different keyboard layouts? There are accessible typing tutors and there are typing tutors that support Colemak, but i don’t think there is any overlap.

Please leave a comment if you are interested in helping to create something, or have some ideas about how it could be.

I guess i will never understand just how privileged i am to be blessed with both sight and hearing, but i am interested to learn more about accessibility issues, and i at least *want* to care more about the kind of difficulties i hope i will never have to face.

QIDO for Colemak please!

As most of my followers are probably well aware, i’m a bit of a nerd on the matter of keyboard ergonomics and keyboard layouts.

For any Dvorak typists, the QIDO (QWERTY In, Dvorak Out) is a very useful device that converts any USB keyboard into a Dvorak one. It is a spin-off product of KeyGhost, a hardware keylogger, but in this case put to a completely different use.

The QIDO plugs between the keyboard cable and the USB port on the computer, and it translates all keyboard input signals into the Dvorak letters. Reminds me somehow of a Babel fish! :) Having a QIDO means you don’t have to switch keyboard layout at the operating system level, which is especially useful if you’re pair programming with someone who types on QWERTY.

A few months ago i wrote to KeyGhost, the makers of the QIDO, to ask for a Colemak version (or even better, a programmable version) of the QIDO. Theo Kerdemelidis gave this reply:

We have had a few requests for Colemak support, so we will look into it as soon as we have a chance.

I know they also sent the same response to someone else who asked for a Colemak QIDO at the same time. As far as i’m aware they are still sending out the same reply to people who ask.

So i have a task for you! If you could use a Colemak QIDO (or QICO as they might call it!) please email helpdesk@keyghost.com to let them know your interest. If you get any reply, please let me know here.

Dvorak might be more popular at the moment, but that’s only because it’s been around for a lot longer than Colemak. We know that Colemak has the edge, and it’s getting more popular all the time. Let’s give KeyGhost all the encouragement we can to get a Colemak version of the QIDO made soon! :)

Unless there’s a big solar flare!

I was just listening to The Angry Atheist episode 37 interview with The Godless Bastard and i was highly amused by this exchange.

Somehow they got talking about the internet and our reliance upon it.

– It’s almost like a drug … my world comes to a screeching halt without the internet … we just become so dependent on the technology, i mean, how the hell do we get by without it? That’s what i want to know.

– We don’t have to. We’re never gonna have to. It’s always gonna be there!

– Yep, i know, i know!

– Unless there’s a big solar flare!

– That’s right, that’s right, it’ll take everything out!

I have heard the sun is getting more active at the moment, and solar flares are becoming more common and more powerful. Part of me really wants to see what would happen if a big solar flare took down all our telephones, television and internet. How would we behave when cut off from the wider world? Would we turn to our local communities for support? I like to hope we would.

I have a few close friends locally, but i always think that i could have many more friends in my neighbourhood if it didn’t just seem so weird to go and introduce myself. I think it might take something drastic like a solar flare to get us off our computers, out of our safe little houses and connect with the people around us.

OS X: So you think you’re password protected?

To quote Bob Marshall: “Security is always relative, never absolute”

When i started contracting, i thought it would be a good idea to make my MacBook require a password on booting up or waking up from the screensaver. For weeks i’ve been using it fine coming out of the screensaver, but today i rebooted and couldn’t log in. I think it must be something to do with the Colemak keyboard layout. I entered the correct password, in both Colemak and QWERTY, but it was having none of it.

Slightly flustered, i turned to my phone and searched for “forgot osx password”. Very quickly i found a few articles on how to restart, hold down Cmd + s to get into single user command line mode, and then mount the filesystem for reading and writing.

Without entering a password, you now have superuser access to the whole system. You can reset people’s passwords. You can view and modify files. You can wipe the whole computer if you want to.

All i’m saying is, if you think an account password will protect you, you’re wrong. It may act as a deterrent, but if someone really wants access to your mac, they could get it in less than 5 minutes.

It’s not just macs either: How To Reset Admin/Root Password gives easy to follow instructions for FreeBSD, Linux, OS X, Solaris and Windows. Ironically, Windows is the hardest one to crack on this point!

It’s a bit of a wake-up call for me.

Computer generated art

I woke up yesterday with a simple idea for generating a picture based on an input string. I don’t exactly know where the idea came from, but i think i’ve been influenced by Nick Huggins, whose abstract work i adore, and also in a way by QR codes. I’m not a particularly artistic person, but i figured i could come up with an algorithm and let the computer do the creative bit for me! :)

So i installed the open source tools ImageMagick and RMagick, learned a bit about the RVG library, and set about trying some ideas. I fiddled and tweaked the algorithm until it seemed to consistently output something that was reasonably pleasant. Here is the picture for my name, and for my twitter id.

aimee @sermoa

Having tried random numbers and obvious inputs like my name, i searched for other input sources. Being interested in community generated content, i wrote a script to fetch the current top twitter trends. Here are the results.

Kim Hee Chul Solomon Burke Steamed Bun #thingsyoushouldntsay HEEBUM
#badsongsinjail Limera1n Aiden #bsr_tousounow One Direction

As you can see, some come out better than others. I’m adding the input string mostly for debugging purposes so that i can see how the image was seeded. When i get one that i like, i can increase the blur and remove the input string. For example, i really like the images produced by “Kim Hee Chul” and “#bsr_tousounow”, so let’s try with a bit more blur.

Kim Hee Chul (with more blur) #bsr_tousounow

Nice, hey?! Not sure they’re ready for a gallery just yet, but certainly an interesting experiment.

Everything in the picture is generated from the input string: the size, colour, number of boxes, box sizes, opacity, border style. It is extremely unlikely that any two input strings would ever generate the same picture. However, the algorithm is not random. Given an input string, you’ll always get the same picture, though i may choose to do some post-processing on it (such as blur, frame, lighten or darken).

In the interests of sharing knowledge here is the main structure of my generator, but the really creative part is in how it comes up with the numbers, which is going to remain a secret, sorry!
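
A simplified sketch along those lines (not my actual generator: it assumes the input string is hashed into a seed, and the drawing choices are placeholders for the secret part) might look something like this:

require 'rvg/rvg'      # RMagick's vector graphics library
require 'digest/md5'

# Sketch of a deterministic picture generator: every visual decision is
# derived from a seed based on the input string, so the same input always
# produces the same image. The specific choices below are placeholders.
def generate(input, filename)
  seed = Digest::MD5.hexdigest(input).to_i(16)
  rng  = Random.new(seed)

  width, height = 600, 400
  rvg = Magick::RVG.new(width, height) do |canvas|
    canvas.background_fill = format('#%06x', rng.rand(0x1000000))

    # A seeded number of translucent boxes with seeded sizes and colours.
    rng.rand(5..15).times do
      w, h = rng.rand(30..200), rng.rand(30..200)
      x, y = rng.rand(width - w), rng.rand(height - h)
      canvas.rect(w, h, x, y).styles(
        fill:    format('#%06x', rng.rand(0x1000000)),
        opacity: rng.rand(0.3..0.9)
      )
    end

    # The input string goes on for debugging, so i can see how it was seeded.
    canvas.text(10, height - 10, input).styles(fill: 'white', font_size: 14)
  end

  rvg.draw.write(filename)
end

generate('aimee', 'aimee.png')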

I am willing to generate images for anybody who asks nicely! :)

The Wessex Wyvern as SVG

I needed to find a good quality image of a Wyvern, the symbol of Wessex, ancient kingdom of the West Saxons. The best i could find was the image on the Wessex Flag, below, but it is poor quality and can’t be scaled up. So i had to learn a bit about path tracing and scalable vector graphics.

Wessex Wyvern (low quality)

Wessex Flag by Chrys Fear, found at fotw.net/flags/gb-wessx.html

I used a combination of the free open source graphics tools GIMP, Inkscape and Potrace to trace the shape into vector format. Even at the same size, it’s already much better, saved here in PNG format:

Wessex Wyvern (higher quality)

The SVG is infinitely scalable and you can download it here: wessex-wyvern.svg

Here you can see the quality improvement:

Before and after - close up

Here is a close up of the eye, which took a bit of work to get the paths just right:

Close up of the Wyvern's eye

I needed the Wyvern to form part of the Bi Wessex flag. Here is what we are affectionately calling the Bivern!

The Bivern (Bi Wessex Wyvern)

The Bi Wessex flag is also available as SVG, should you want it: bi-wessex-flag.svg

If you need to improve the quality of an image, i hope you will feel encouraged that it is possible, with a little bit of effort and experimentation.

A simple backup strategy

Today i scanned several of my university lecture notes into PDF format. The ScanSnap document scanner makes this a very fast and easy process, and it includes text recognition. This feels good: i can save physical space by throwing away my notes, but still have them usefully available to me, in searchable format! yay!

Now that i’ve scanned these, i want to be sure that i don’t lose them. I’ve never been much of a person for backups, to be honest. My idea of a backup is something i do just before i upgrade Linux! But i’ve started to think i’d like to get into at least semi-regular backing up.

With that in mind, i came across this article: What’s Your Backup Strategy? by Jeff Atwood. The proposed solution works on Linux! Funny, i always assumed rsync was a Ruby library: it turns out to be a straightforward command line tool.

sudo rsync -vaxE --delete --ignore-errors /home/aimee /media/FREECOM\ HDD/

That was enough to get me a first backup onto an external hard drive. Now it’s just a case of running that periodically to keep it up to date.

I’m not particularly interested in having a cron job because my computer isn’t always on, and the external drive isn’t always plugged in. So i just made myself a simple executable file to sit on the desktop and remind me to click it and synchronise the backup every so often.

#!/bin/bash

source=/home/aimee
target=/media/FREECOM\ HDD/

echo Backing up $source to $target
read -p "Press enter to begin."
sudo rsync -vaxE --delete --ignore-errors "$source" "$target"
read -p "Press enter to close."

See, i said it was simple! But a simple solution is better than no backup solution at all, right? :) Now that i’ve started with something i can tweak it as i find necessary.

By the way, i love Jeff’s quote in that article: “The universe tends toward maximum irony. Don’t push it.”

Packard Bell Mustek Bearpaw scanner on Ubuntu/Mint

I am quite sure that nobody cares about this except for me, unless they happen to have a similar scanner to mine. I’ve had to do this process about 5 times now on different installs. I can guarantee that it works for Fedora, Debian, Ubuntu and Mint. I thought i’d share it because i’ll probably need it again and someone else might find it helpful.

First you need xsane to be able to scan things at all.

sudo apt-get install xsane

Plug in your scanner by USB. Attempt to scan by typing scanimage. It won’t work, but you need to see the error message.

scanimage
[gt68xx] Couldn't open firmware file (`/usr/share/sane/gt68xx/PS1Dfw.usb'): No such file or directory
scanimage: open of device gt68xx:libusb:004:002 failed: Invalid argument

See that PS1Dfw.usb? You need to get that file from http://meier-geinitz.de/sane/gt68xx-backend/ but be aware that your computer might ask for a different file, such as ps1fw.usb. Whichever it is, find it on the page and click it to download.

Assuming it’s gone into your Downloads folder, move it to the right place.

sudo mv ~/Downloads/PS1Dfw.usb /usr/share/sane/gt68xx

Now try the scanimage command again. With any luck your scanner will burst into life and a whole load of crazy gobbledegook will splurge into your terminal window. This is the picture your scanner is seeing, trying to be displayed as text! Don’t be afraid to Ctrl-C to stop it once you see it working. Or you can just wait for it to finish.

You can also do this to ensure that your scanner is configured correctly:

scanimage -L
device `gt68xx:libusb:004:002' is a Mustek Bearpaw 1200 CU Plus flatbed scanner

Now to actually scan something! Open up The Gimp and click File -> Create -> XSane -> gt68xx:libusb:004:002

It comes up with this super ugly XSane interface, where you can make a preview, choose the scan area, fiddle with the colour settings and DPI settings, and scan an image.

XSane scanning an image on Linux Mint

When it’s done, it’ll come back to The Gimp ready for you to edit the scanned image.

Protip: If you lose one or more of the XSane windows, you can get them back by going to the Window menu of XSane and ticking the ones you need.

A quick way to resize images in Linux

Quite often i find myself wanting to resize a whole directory of images. Rather than opening them all in the GIMP, i do it through the command line. I seem to do this often enough that it’s worth recording here, for my own reference as well as for anybody else who would find it useful.

First of all, you need imagemagick.

sudo apt-get install imagemagick

Change directory to where the images are and create a subdirectory for the resized versions.

cd ~/Photos/2010/04/05
mkdir resized

My phone likes to put spaces in file names which really confuses things, so i convert them to underscores. You can skip this step if your filenames contain no spaces.

for image in *.jpg; do mv "$image" "$(echo "$image" | tr ' ' '_')"; done

Now for the clever bit: i run the convert command on the image files, resizing them to 1024px wide and saving with a quality of 80%.

for image in *.jpg; do convert $image -resize 1024 -quality 80 resized/$image; done

Lovely wonderful linux! :D

One thing to note: giving -resize a single number always sets the width (the top edge), which may not be the longest edge. If you have a portrait file which is 1536×2048 it comes out at 1024×1365 (not 768×1024 as you might have expected).

The resize option can also take a percentage, so if you know all your images are the same size then you could just pass it 50% to reduce them to half size.

for image in *.jpg; do convert $image -resize 50% -quality 80 resized/$image; done
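
If you don’t mind dropping into Ruby instead, a rough sketch with RMagick (not what i actually use above) gets around the top-edge issue, because resize_to_fit keeps the aspect ratio and fits the image within a bounding box whichever edge is longer:

# Sketch using RMagick: resize_to_fit fits each image within 1024x1024 while
# keeping the aspect ratio, so a 1536x2048 portrait comes out at 768x1024.
require 'rmagick'   # 'RMagick' on older versions of the gem

Dir.glob('*.jpg') do |filename|
  image = Magick::Image.read(filename).first
  image.resize_to_fit(1024, 1024).write("resized/#{filename}") { self.quality = 80 }
end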

Imagemagick is super-incredible-awesome so there probably is a way to deal with differently sized images at different orientations. If anybody knows, please add it in the comments! :)