Best 3.5mm to USB Type-C adapters for microphone input

After switching to a reconditioned Galaxy S21 Ultra, which has no 3.5mm headphone jack, the prospect of moving to Bluetooth earbuds didn’t faze me; they’re cheap enough & the quality is pretty good.

However, I wanted to use my Boya BY-M1 lav mic (with 3.5mm jack) to record videos for YouTube, and therefore needed a 3.5mm to USB Type-C adapter. After rooting around on Amazon I found a couple with decent reviews; one at £8, the other £18.

They might be fine for listening, but both were an absolute disaster for recording audio! The cheaper one had a poor frequency range, so it sounded like I was talking through a cushion. The other had a good frequency range (similar to the internal mic), but a high amount of electrical noise.

Since then, I’ve tested a couple more adapters and have put together a series of video reviews to show you which ones to avoid, and which ones I’d recommend you buy.

Displaying a Replicon Time-Off Calendar in SharePoint (or wherever!)

Our team just started using Replicon to record our time off, but wanted to have a nice calendar widget on our SharePoint page (or wherever) rather than having to log into Replicon each time.

At first I tried their API, but struggled with permissions and getting any time-off data out of it.

However, I did find a more straightforward way.. through the iCalendar feed! I’m posting my solution here for anyone else who might want to do this with the minimum of effort (but it does need some effort! ;).

First off, you need to proxy the iCal data feed through something because of the browser’s cross-origin (CORS) security restrictions.. so I’m doing that with a very simple Azure Logic App that grabs the feed & responds with the data.

Proxying the iCalendar Feed
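For anyone curious about the shape of that Logic App, here’s a rough sketch of its code view (the action names & feed URL placeholder are mine): an HTTP request trigger, an HTTP GET of the feed, and a Response action that adds a CORS header;

{
  "triggers": {
    "manual": { "type": "Request", "kind": "Http" }
  },
  "actions": {
    "Get_iCal_feed": {
      "type": "Http",
      "runAfter": {},
      "inputs": { "method": "GET", "uri": "https://YOUR_REPLICON_ICAL_FEED_URL" }
    },
    "Response": {
      "type": "Response",
      "runAfter": { "Get_iCal_feed": [ "Succeeded" ] },
      "inputs": {
        "statusCode": 200,
        "headers": { "Access-Control-Allow-Origin": "*" },
        "body": "@body('Get_iCal_feed')"
      }
    }
  }
}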

This can then be consumed by our calendar JavaScript code, which takes the iCalendar data, reformats it to JSON, then tidies it to feed into a free JavaScript calendar called FullCalendar.
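The CodePen below has the full version, but stripped right down, the conversion looks something like this (a sketch which assumes the ical.js and FullCalendar libraries are loaded, and uses a placeholder proxy URL);

// Fetch the proxied iCal feed, convert the VEVENTs to FullCalendar
// event objects, then render the calendar.
// PROXY_URL is a placeholder for your Logic App endpoint.
const PROXY_URL = 'https://YOUR_LOGIC_APP_PROXY_URL';

fetch(PROXY_URL)
  .then(response => response.text())
  .then(icsText => {
    // Parse the raw iCalendar text and pull out the VEVENT components
    const component = new ICAL.Component(ICAL.parse(icsText));
    const events = component.getAllSubcomponents('vevent').map(vevent => {
      const event = new ICAL.Event(vevent);
      return {
        title: event.summary,
        start: event.startDate.toJSDate(),
        end: event.endDate.toJSDate()
      };
    });
    // Hand the tidied JSON to FullCalendar
    const calendar = new FullCalendar.Calendar(
      document.getElementById('calendar'),
      { initialView: 'dayGridMonth', events: events }
    );
    calendar.render();
  });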

The full source for this is in a CodePen, which makes it easy for you to try this out yourself. https://codepen.io/mattc_uk/pen/bGYLBpP?editors=0010

(you’ll need to edit the URL in the code to point at your iCalendar data proxy).

Once you’re happy with the code (in CodePen) you’ll need to host it on a webserver, then you can use the Embed component in SharePoint to get it onto your site page. To do this, I actually used another quick Azure Logic App, since it means I don’t need to have an actual web server.

I pasted all the code from the CodePen into a Response block & boom, we have an easily-embedded calendar containing our team’s holidays!

Hosting the code in a Logic App

Hopefully this will help anyone else trying to do something similar, so you don’t have to start from scratch.

Arduino IR Code Translator – Unsupported Device Control with a Fire TV Remote

[Update: As featured on Hackaday! :) ]

The Fire TV remote has a neat feature which lets you control other equipment like the TV or a soundbar. The list of manufacturers & devices it’ll control is massive.. but not all-encompassing!

I bought an Edifier R1855DB (very similar to the R1850DB), and wanted to control it using the Fire TV remote.. which saves juggling 2 remotes (and the Edifier one is pretty cheap & feels like it could break at any moment!).

I soon found out that this particular Edifier model isn’t in the list of supported devices in the Fire TV equipment control feature, which is really frustrating. There’s no route to get a device added to Amazon’s list, nor does the speaker have a ‘learn’ mode.

When I contacted Edifier they said “Our remotes use custom IR codes that can not be inputted into a universal remote”. And “information about the IR code is not open to the public”. Pretty ridiculous.

This is where I got curious about what IR codes the Edifier remote is sending out.. so I took a cheap Arduino Nano microcontroller and an IR receiver & wired them up on a breadboard. There’s an IR library with a bunch of examples, so I used the ‘IR dump’ code, uploaded it to the board & could instantly see the codes from the remote.

Capturing IR Codes

Apparently the Edifier R1855DB uses the NEC protocol, which is really common, and you can see the sequences it sends easily enough. I noted down the IR codes for each of the buttons I was interested in, and can now look at transmitting them!
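To give an idea of what’s involved, the essence of that dump sketch (using the IRremote 3.x-style API; your receiver’s data pin may differ) is roughly;

// Minimal receive-and-dump sketch (IRremote 3.x-style API)
#include <IRremote.hpp>

#define IR_RECEIVE_PIN 2 // data pin of the IR receiver module

void setup() {
  Serial.begin(115200);
  IrReceiver.begin(IR_RECEIVE_PIN, ENABLE_LED_FEEDBACK);
}

void loop() {
  if (IrReceiver.decode()) {
    IrReceiver.printIRResultShort(&Serial); // prints the decoded protocol, address & command
    IrReceiver.resume(); // ready to receive the next code
  }
}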

You can get an IR transmitter for the Arduino .. at first I tried an IR LED from an old TV remote, but it needed a transistor to make it work properly, which I didn’t have. So I ended up buying a KY-005 IR module from eBay for £2.50.

KY-005 IR Module attached to the Arduino Nano

Hooking this up to the Arduino was simple, and I tested it was working using more example code from the IRremote library. Finding the right format to play back the NEC codes I captured took a bit of trial and error.. I eventually found the raw codes worked best. I actually used a Raspberry Pi with another IR receiver to validate that the codes being sent were identical to the original remote’s.

What I can then do is have an IR receiver AND transmitter hooked up to the same Arduino and have it act as a translator between one manufacturer’s codes and another’s.

We can power the Arduino using the USB socket on the TV. This only powers up when the TV is on, which is quite handy.. and since the board only boots when the TV turns on, we can write some code to send the speaker’s power-on IR command when the Arduino boots (the sketch further down does exactly that in setup()).

Here’s a quick wiring diagram, showing how everything is connected together;

Wiring Diagram

I didn’t have a cable long enough, so I took an old USB cable, cut it up, and soldered it into the IR transmitter so that I could place it near the Edifier IR receiver.. that was the most fiddly bit of this project.

Once I had that set up, I picked a supported device from the Fire TV equipment list – the Amazon Basics Soundbar – and dumped the IR codes for it from the Fire TV remote.

I then wrote some code to listen for the Amazon Basics Soundbar commands & emit a corresponding Edifier IR code. This worked an absolute treat after a bit of debugging. I even added a sequence detector so you can press Mute x2 then Vol Up to send the speaker power signal in case it somehow gets out of sync. Adapting the code to work with whatever equipment you have should be straightforward..
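The full code is in the GitHub repo linked below, but a stripped-down illustration of the idea (again an IRremote 3.x-style API; all the addresses/commands here are placeholder values, so capture your own) looks something like this, including the power-on-at-boot trick mentioned earlier;

// Illustration only - the real code is in the GitHub repo below.
#include <IRremote.hpp>

#define IR_RECEIVE_PIN 2
#define IR_SEND_PIN 3

// Placeholder NEC codes (hypothetical values - substitute your own captures)
const uint16_t AMAZON_ADDRESS  = 0x10; // 'fake' Amazon Basics Soundbar
const uint8_t  AMAZON_VOL_UP   = 0x20;
const uint16_t EDIFIER_ADDRESS = 0x30; // Edifier speakers
const uint8_t  EDIFIER_VOL_UP  = 0x40;
const uint8_t  EDIFIER_POWER   = 0x41;

void setup() {
  IrReceiver.begin(IR_RECEIVE_PIN, ENABLE_LED_FEEDBACK);
  IrSender.begin(IR_SEND_PIN);
  // The TV's USB socket only has power when the TV is on, so booting
  // means the TV has just been switched on.. turn the speakers on too.
  IrSender.sendNEC(EDIFIER_ADDRESS, EDIFIER_POWER, 0);
}

void loop() {
  if (IrReceiver.decode()) {
    // Only translate NEC codes addressed to the 'fake' soundbar
    if (IrReceiver.decodedIRData.protocol == NEC
        && IrReceiver.decodedIRData.address == AMAZON_ADDRESS
        && IrReceiver.decodedIRData.command == AMAZON_VOL_UP) {
      IrSender.sendNEC(EDIFIER_ADDRESS, EDIFIER_VOL_UP, 0);
    }
    IrReceiver.resume();
  }
}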

Altering the Arduino code for your equipment

You can build your own version of this for about £8, and it’s incredibly easy to put together, only requiring some basic electronics and coding skills.

Hope this has been useful.. let me know how you get on if you decide to build one of these yourself.

The GitHub repo for this project has a lot more detail & all the source code you need; https://github.com/mattcuk/IRtranslator

There’s a video of this project on YouTube if you want to see a bit more of the build process and see it working.

DIY IoT Motion Sensor for Remotely Monitoring Elderly Relatives

We have an elderly relative who, in the coming years, we expect to have to keep a closer eye on. It’s not like we don’t have almost daily contact with them already, but having an early heads-up of any mobility issues would be beneficial & give a degree of peace of mind.

Having a wifi-enabled camera installed is one option, but would be pretty intrusive, and I don’t want to have to watch it to figure out if there are any problems.

What I realised I needed was a Passive Infrared (PIR) movement sensor which could ping a service in the Cloud & alert me for anything out of the ordinary.

There are such solutions sold by a bunch of companies, but they cost upwards of £100 and tend to carry a subscription charge.

I started to think I’d roll my own solution using a development board and a PIR sensor, but then I remembered I actually have an old battery-powered PIR which activates a mains adapter (the idea being that you’d plug a lamp into the socket for home security).

The adapter I have is from a company called Timeguard, but it’s about 15 years old and is obsolete.. they don’t have any current products of this type, but you can find similar ones on Amazon easily enough.

What I like about this is that I can concentrate on the IoT piece, and let the adapter + sensor do their thing without worrying about the electronics.

All I needed to do next was take a small ESP8266 board I had lying around and code it up to connect to the local wifi & call a URL.. which in my case will be an Azure Logic App. Logic Apps are great.. you can get them up and running very easily, at minimal cost, and without writing any code!

At first, the code for the ESP8266 just used the examples from the Arduino IDE. In very little time I had it connected to Wi-Fi and ready to make an HTTP call to wherever I wanted. However, I soon discovered the examples only worked with non-HTTPS sites (Logic Apps are HTTPS-only).

After a bunch of Googling, I found this library from gojimmypi on GitHub, which allows you to make HTTPS calls.

After switching to that & rewriting portions of the code, the board called the Logic App URL without issue, I received a 202 (Accepted) response, and I could see the call in the run History;

Now we’re cooking! The motion sensor activates the power adapter, which boots up the ESP8266 board, which connects to wifi & calls Azure. The adapter can stay powered for anywhere from 15 seconds to 15 minutes.. setting it to 15 minutes means we’re not repeatedly powering the board & calling Azure (and incurring a load of cost).

The Logic App can do whatever we want to record the event.. update a table, or write a small text file with a timestamp.

Now that we have that, we can have a secondary Logic App running on a schedule to monitor the events & alert us by email (or whatever) if activity falls outside what we typically expect. For example: at 9am, check that motion has been detected in the last 3 hours; if it hasn’t, send me an email so I can give the relative a quick call to check they’re ok.
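To give a flavour of that monitor app: assuming the events are written to an Azure Table, and using a ‘Get entities’ action (the action name here is mine), the time filter and the alert condition can be as simple as;

Get entities (Azure Table Storage) - Filter Query:
  Timestamp ge datetime'@{addHours(utcNow(), -3)}'

Condition (true = no motion seen, so send the email):
  @empty(body('Get_entities')?['value'])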

With Azure, it’s always good to keep an eye on costs. In this case, let’s look at the worst-case scenario, where we have a very active elderly relative who trips the motion sensor every 15 minutes from 7am to 11pm.. that’s 16 hrs x 4 = 64 possible calls a day.

Logic Apps are billed per action block, with Standard and Premium actions costing a bit more. In our case we have 1 trigger block and 1 standard block that writes to a Storage Account (table or file).

The Azure cost calculator can then be used to figure out the monthly cost..

That’s a maximum of 21 pence (GBP) a month if it triggers every 15 min (which it won’t).. not bad!

We’ll also have the ‘event monitor’ Logic App, but that won’t run as often.. maybe every 4 hours starting at 9am, finishing at 9pm.. so that’s 4 times a day, with a few more action blocks in it to figure out what to do.. so maybe 10-15 blocks, 5 of which might be ‘Standard’…

As you can see, the monthly costs are minimal, which helps illustrate how useful Azure can be for something like this!

It took a few hours to put this all together & I’m pretty happy with the solution. We can put the battery-powered PIR sensor somewhere like the kitchen or hallway, safe in the knowledge we’d get an alert if there wasn’t the level of activity we’d typically expect to see.

If you’re interested in the ESP8266 source, here it is;

// WIFI SETUP & LOGIC APP URL
char ssid[] = "YOUR_WIFI_SSID";
char pass[] = "YOUR_WIFI_PASSWORD";
char logicAppURL[] = "https://YOUR_LOGIC_APP_URL";

// HTTP AND WIFI
// Needed to go here & install board support for the ESP8266: https://github.com/gojimmypi/ESP8266-Arduino
// This gives access to the WiFiClientSecureBearSSL client library (which is needed for HTTPS).
// See the install instructions on that GitHub page.
// Also installed h/w support for the TTGO OLED board I have.
#include <ESP8266WiFi.h>
#include <ESP8266WiFiMulti.h>
#include <ESP8266HTTPClient.h>
#include <WiFiClientSecureBearSSL.h>

// OLED INCLUDES
#include <Arduino.h>
#include <U8g2lib.h> // make sure to add the U8g2 library and restart the Arduino IDE
#include <SPI.h>
#include <Wire.h>

// OLED pins & driver
#define OLED_SDA 2
#define OLED_SCL 14
#define OLED_RST 4
U8G2_SSD1306_128X32_UNIVISION_F_SW_I2C u8g2(U8G2_R0, OLED_SCL, OLED_SDA, OLED_RST);

ESP8266WiFiMulti WiFiMulti;

void setup() {
  Serial.begin(115200);
  Serial.println();
  oledInit();
  oledPrint("Start...", false);
  delay(500);
  oledPrint("Wifi connect ..", false);
  delay(500);

  // Connect to the local wifi
  Serial.print("Connecting to ");
  Serial.println(ssid);
  WiFiMulti.addAP(ssid, pass);
  String progress = ".";
  while (WiFiMulti.run() != WL_CONNECTED) {
    delay(500);
    Serial.print(progress);
    oledPrint(progress, false);
    progress = progress + ".";
  }
  oledPrint("Wifi connected", false);
  delay(1000);

  // Ping the Logic App to record the motion event
  Serial.println("Calling HTTP");
  httpGET(logicAppURL);
}

void loop() {
  delay(10000); // nothing else to do; the call is made once at boot
}

void httpGET(String url) {
  std::unique_ptr<BearSSL::WiFiClientSecure> client(new BearSSL::WiFiClientSecure);
  client->setInsecure(); // skip certificate validation to keep things simple
  HTTPClient https;
  Serial.print("[HTTPS] begin...\n");
  oledPrint("[HTTPS] begin", false);
  if (https.begin(*client, url)) {
    Serial.print("[HTTPS] GET...\n");
    oledPrint("[HTTPS] GET...", false);
    // start the connection and send the HTTP header
    int httpCode = https.GET();
    // httpCode will be negative on error
    if (httpCode > 0) {
      // HTTP header has been sent and the server response header handled
      Serial.printf("[HTTPS] GET... code: %d\n", httpCode);
      oledPrint("[HTTPS] GET " + String(httpCode), false);
      // file found at server
      if (httpCode == HTTP_CODE_OK || httpCode == HTTP_CODE_MOVED_PERMANENTLY) {
        String payload = https.getString();
        Serial.println(payload);
      }
    } else {
      Serial.printf("[HTTPS] GET... failed, error: %s\n", https.errorToString(httpCode).c_str());
      oledPrint("[HTTPS] GET " + String(https.errorToString(httpCode)), false);
    }
    https.end();
  } else {
    Serial.printf("[HTTPS] Unable to connect\n");
    oledPrint("[HTTPS] GET Err", false);
  }
}

void oledInit() {
  Serial.println("OLED Start..");
  u8g2.begin();
  u8g2.setFont(u8g2_font_6x10_tf);
}

void oledPrint(String message, bool frame) {
  char charBuf[25]; // sized to match the toCharArray() length below (was 15, a buffer overrun)
  message.toCharArray(charBuf, 25);
  u8g2.clearBuffer();
  u8g2.drawStr(10, 25, charBuf);
  if (frame) u8g2.drawRFrame(0, 0, 128, 32, 4); // https://github.com/olikraus/u8g2/wiki/u8g2reference#drawrbox
  u8g2.sendBuffer();
}

Fujifilm X-E3 Accessories, Tips, and Sample Videos

One of the pieces of tech I bought this year was a mirrorless APS-C digital camera which replaces an ageing Nikon D5000. I wanted something super-compact, but with interchangeable lenses & the ability to record 4K video.

Since there were so many options, I started researching cameras & ended up creating a spreadsheet to record all the things I was interested in :)

Long story short, I chose the Fujifilm X-E3 due to the price-vs-features, and 8 months later I’m still happy with my choice; it produces some brilliant images, and is very portable for taking with me & the family.

Video is pretty decent from the camera, as long as you choose the right resolution & FPS. I’ve uploaded samples from the camera into a YouTube playlist for anyone who is interested in how the different modes look.

There are a few things that could be improved; adding 60 fps for 4K video, longer recording times (it’s limited to 10 min at 4K and 15 min at 1080p, which is low these days), a dedicated ‘Record’ button (rather than having to dip into the Drive Mode menu to switch from Photo to Video), and a larger flippy screen on the back. Let’s see what the X-E4 brings in 2021!

Camera Bag

Next, if you’re looking for a really compact bag, I really like the Think Tank Mirrorless Mover 10, which is the smallest I could find that fits the camera + the 18-55mm lens. It’s also pretty cheap at about £20-30 (watch out for deals!).

Extra Batteries

There are plenty of cheap 3rd party batteries available; the ones I chose are from Baxxtar and I’ve had no issues with them at all.. performance is just as good as the one that came with the camera.

Companion App

If you get any newer Fujifilm camera, there’s a companion app available which allows you to connect your phone to the camera & do automatic synchronisation of the photos. It does this after you turn the camera off, and sets up an ad-hoc Wifi access point to do the transfer (Bluetooth is too slow). It’s pretty handy, since your photos all end up on your phone, which can then sync with Google Photos when you’re back at home, making the whole process quite seamless.

However, if you use this, one thing you’ll want to do is enable full-resolution files, since by default it downsizes everything to keep the transfer fast. Here’s how to do that;

Synchronise Videos to Your Phone

The Fujifilm app won’t transfer videos, so you’ll need to use some other apps to sync them automatically to your phone using a USB OTG adaptor & SD card reader. One of the apps is called FolderSync, and the other is Automate. Here’s how I use them to automate the process;

That’s it.. I hope these tips help anyone with an X-E3, or anyone doing their research on which camera to buy.

How to Automate PageSpeed Insights for Multiple URLs on a Schedule using Logic Apps or Flow

For the website I’m responsible for, I was interested in capturing data from the Google PageSpeed Insights tool and having it recorded somewhere on a schedule. There’s a blog post on Moz.com that talks about doing this with a Google Sheet, but it wasn’t quite what I was after; I wanted the data to be collected more regularly.

Instead of using Google Sheets (and a fair amount of code), I decided to use an Azure Logic App (you can use this or Microsoft Flow), which is part of Microsoft’s Cloud platform.

The Logic App is run on a Recurrence trigger which I set to every 6 hours. By collecting the results automatically over time, you’ll see how the changes you’re making to your site affect your PageSpeed scores.

Recurrence trigger, set in hours
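If you peek at the code view, the trigger itself boils down to something like this (a sketch of the JSON, not copied from my app);

"triggers": {
  "Recurrence": {
    "type": "Recurrence",
    "recurrence": {
      "frequency": "Hour",
      "interval": 6
    }
  }
}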

The first step simply defines the URLs you want to check; the app then loops over each one & calls the PageSpeed API. You’ll need to get an API key, and make sure the PageSpeed API is enabled for it.
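For reference, the underlying call is just an HTTP GET against the v5 endpoint, so you can try it from the command line first (placeholder URL & key);

curl "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://www.example.com/&key=YOUR_API_KEY&strategy=mobile"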

Results from the API call are parsed out and pushed into a new row in an Excel Online sheet.

If you’re interested in setting this up yourself, I recorded a short video which shows how it works in more detail.

There are a few foibles in Logic Apps which caught me out. First, getting the list of URLs into an Array didn’t work as expected; I had to switch to Code View to correct the escaping of the newline character so it reads;

@split(variables('urlList'), '\n')

The JSON payload from the PageSpeed API is pretty large, so I’ve listed the paths to the elements you’ll be interested in below. I’m using split (on space) purely to get at the numerical value, which is more useful in the spreadsheet;

First Contentful Paint

@{split(body('HTTP')?['lighthouseResult']?['audits']?['first-contentful-paint']?['displayValue'], ' ')[0]}

First Meaningful Paint

@{split(body('HTTP')?['lighthouseResult']?['audits']?['first-meaningful-paint']?['displayValue'], ' ')[0]}

Speed Index

@{split(body('HTTP')?['lighthouseResult']?['audits']?['speed-index']?['displayValue'], ' ')[0]}

Time To Interactive

@{split(body('HTTP')?['lighthouseResult']?['audits']?['interactive']?['displayValue'], ' ')[0]}

Time to First Byte (this audit’s displayValue is a short sentence rather than a bare number, hence taking element 3 of the split)

@{split(body('HTTP')?['lighthouseResult']?['audits']?['time-to-first-byte']?['displayValue'], ' ')[3]}

Overall, this was quite easy to put together and shows the power of Azure Logic Apps. Being able to do this without any code or (your own) servers, and to get things live in a couple of hours, is a fantastic capability to have at your disposal.

Make your own £5 ambient TV backlight

After clearing out some junk, which included an old halogen desk lamp, I was thinking about putting an LED light behind the PC monitor.

Then I remembered I’d bought a ring of 24 RGB LEDs from Aliexpress last year & hadn’t used it in a project.

I also had a spare Arduino Nano, and all the bits I’d need to hook up a dial (potentiometer) for the light level, and a button to cycle through different colour modes.

Here’s a quick video of it in action..

Parts

LED Ring – £2
Arduino Nano – £1.75
Breadboard & bits – £1.25

Wiring it up

It’s an easy one to wire up.. I took a few basic examples and mashed them together to get what I wanted from the design.

I’m not an electronics expert, so I approached this like I approach software development; build it in manageable chunks which I can implement and test individually, then bolt it all together.

Fritzing wiring diagram for the LED ring

The code for the project is pretty simple.. I think the most complicated bit is handling the button, which needed debounce functionality.


#include <Adafruit_NeoPixel.h>
#ifdef __AVR__
#include <avr/power.h>
#endif

#define PIN 6         // Arduino pin connected to the LED ring's data input
#define NUMPIXELS 24  // Number of pixels on the LED ring
#define POT_PIN 0     // Potentiometer (analog) pin
#define BUTTON_PIN 2  // Button pin

Adafruit_NeoPixel pixels = Adafruit_NeoPixel(NUMPIXELS, PIN, NEO_GRB + NEO_KHZ800);
int showType = 0;
bool oldState = HIGH;

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP); // Declare pushbutton as input
  pixels.begin(); // This initializes the NeoPixel library.
}

void loop() {
  // Read the potentiometer value and translate it to how many pixels we want to illuminate
  int value = analogRead(POT_PIN);
  value = map(value, 0, 1023, 0, 25);

  // Switch colours if the button is pressed
  bool newState = digitalRead(BUTTON_PIN);
  if (newState == LOW && oldState == HIGH) {
    delay(20); // Short delay to debounce the button.
    // Check if the button is still low after the debounce.
    newState = digitalRead(BUTTON_PIN);
    if (newState == LOW) {
      // Cycle through the different colour schemes
      showType++;
      if (showType > 8) showType = 0;
    }
  }
  oldState = newState; // Remember the button state for the next pass.

  uint32_t color = pixels.Color(255,255,255); // default to white when first booted
  if (showType==1) color = pixels.Color(0,0,255);   // blue
  if (showType==2) color = pixels.Color(0,255,0);   // green
  if (showType==3) color = pixels.Color(255,0,0);   // red
  if (showType==4) color = pixels.Color(0,127,255);
  if (showType==5) color = pixels.Color(255,127,0);
  if (showType==6) color = pixels.Color(255,0,127);
  if (showType==7) color = pixels.Color(0,255,255);
  if (showType==8) color = pixels.Color(127,127,255);

  // Illuminate X pixels depending on how far the potentiometer is turned
  for (int i=0; i<NUMPIXELS; i++) {
    if (i < value) {
      pixels.setPixelColor(i, color);
    } else {
      pixels.setPixelColor(i, pixels.Color(0,0,0)); // Don't show anything
    }
  }
  pixels.show(); // Send the updated pixel configuration to the hardware.
}

And here are a few pictures of it in place behind the PC monitor..

Samsung Q9FN Tips, Tricks, Secrets & Problems

The 2018 flagship TV from Samsung is the Q9FN (Amazon link: https://amzn.to/2IvwmkO). This is a FALD (Full Array Local Dimming) display, with 480 LEDs lighting the panel rather than it being edge-lit like a lot of other models, which generally means it’s much more capable of providing good contrast ratios. Overall I think it’s a good TV.. most of the time.

Problems

However, the reality is different from the headlines and reviews in the major publications. What a lot of people have found is that the Contrast Enhancement & Local Dimming features can cause light fluctuation problems, which are especially apparent in dark scenes with subtitles.

Light Fluctuations

You can see for yourself in this clip from Narcos Mexico S01E05 at about 45 minutes. This is being viewed via the built-in Netflix player in HDR, with Contrast Enhancement turned on, since without it enabled, dark scenes are waaay too dark to see anything!

It shows how subtitles affect the light levels in other areas of the screen.. like right at the top, nowhere near the subtitles.

Backlight Flicker

This shows how I’m seeing a flicker in certain scenes. It’s like the TV can’t quite decide on the light level it’s supposed to display before it clicks into place. I’ve run this at standard speed, then slowed it right down to illustrate the flicker. You’ll need to look closely at the background & look for the light fluctuation.

FALD Confusion!

There are also instances when FALD gets in the way of drawing a background with a solid colour. This video shows a short excerpt from the film Searching (2018).. at about 52 min 30 sec. The FALD backlight system has real trouble working out what to do with the dark blue satnav background, which should be a solid/uniform colour.. the bright white roads cause it a lot of problems. This could be a disadvantage of FALD compared to edge-lit or OLED.. at least that’s the way it seems. I’m not entirely sure whether you could even solve this in software.

The blue lights/dots on the left are a reflection of the Xmas tree lights.. so nothing to do with the TV ;)

Tips & Tricks

Okay, enough with the problems, and onto the tips!

Secret Buttons!

At first glance the TV has no physical buttons to control it.. so if you’ve misplaced the remote, it looks like there’s nothing you can do. However, take a look under the frame near the logo and there’s a neat directional control + OK button.

Steam Link for Free!

Instead of buying a Steam Link device, there’s actually a free app that lets you stream games from your Gaming PC to your TV. Install the app & plug your controller into the TV and you’re pretty much good to go!

Removing Adverts(!!) from the Menus

Yes, adverts.. in the menus.. on a brand new TV that you paid a lot of money for!

This video shows how I’ve been able to get rid of the movie trailers which so annoyingly auto-play. They come from apps like TV Plus & Rakuten. It’s not that easy to figure out how to get rid of them.. you certainly can’t uninstall the apps.. Samsung doesn’t let you do that :(

I found ads in the menu too.. if you go back into the policy agreements and make sure you’ve not ticked any of the options to agree to them, the ads should go away.

Here’s a link to the Q9FN on Amazon; https://amzn.to/2IvwmkO

Installing SABnzbd on a Raspberry Pi running OSMC

For quite some time I’ve been running SABnzbd on a PC, downloading files, and then transferring them over the local network to a USB drive attached to a Raspberry Pi which is running OSMC. There’s a Linux version of SABnzbd which means I can cut out the PC and have the Pi handle the downloads. It’ll mean I can queue up the downloads from a web interface running on whatever device I have to hand, like an iPad.

First Try

The initial installation of SABnzbd was quite easy;

sudo apt-get install python-openssl unrar par2

sudo apt-get install sabnzbdplus

Edit the settings so that the web client starts up on port 8085..

sudo nano /etc/default/sabnzbdplus

USER=osmc
HOST=0.0.0.0
PORT=8085

sudo service sabnzbdplus restart

This then allowed me to connect to SABnzbd and transfer over all my settings that I was using on my PC.

Delayed Start

What I found was that SABnzbd started before the USB drive was properly mounted by OSMC, so I disabled the main service from starting up, and added a script to wait for the USB drive to get mounted at a particular path.

There was a good forum post here that pointed me in the right direction.

Disable the default service…

sudo update-rc.d sabnzbdplus disable

Write a quick shell script to wait for the directory/USB drive to be mounted…

nano /home/osmc/startsabnzb.sh

#!/bin/sh

# Wait for this folder to be mounted...
DIR=/media/Elements

while [ ! -d "$DIR" ]; do
  sleep 120
done

/etc/init.d/sabnzbdplus start

chmod a+x /home/osmc/startsabnzb.sh

Add the script to system startup (backgrounded with &, so the wait loop doesn’t block the rest of the boot)…

sudo nano /etc/rc.local

/home/osmc/startsabnzb.sh &

Upgrading

The version of SABnzbd installed above was very dated; that repo doesn’t get updated very often. Here’s how I updated it to the latest version.

sudo su root

echo "deb http://ppa.launchpad.net/jcfp/nobetas/ubuntu xenial main" | tee -a /etc/apt/sources.list
echo "deb http://ppa.launchpad.net/jcfp/sab-addons/ubuntu xenial main" | tee -a /etc/apt/sources.list

apt-key adv --keyserver hkp://pool.sks-keyservers.net:11371 --recv-keys 0x98703123E0F52B2BE16D586EF13930B14BB9F05F

sudo apt-get update

Upgrading sabyenc

This solved the issue where SABnzbd was complaining that sabyenc wasn’t the right version. It uses the 2nd repo (sab-addons) we added in the steps above.

sudo apt-get install python-sabyenc

Final thoughts

SABnzbd runs quite well on the Pi. It is a lot slower than it was on a PC.. it only manages about 3 MB/s of download speed on a wired connection (compared to 6 MB/s over Wifi on a laptop), and unpacking is slow.

However, the files now end up unpacked on the drive I was manually copying them to anyway, so that saves time.

Samsung Galaxy S9 Super Slow-Mo Videos

The Samsung Galaxy S9 is a bit of a bargain if you look at the International versions. When it retailed for about £700 in the UK, you could import it from Italy for about £500! It’s available now on Amazon for £480.

When I bought the S9 I didn’t realise it had a special video mode called Super Slow-Mo. I’ve had slow mo on a Canon Ixus camera before, but it was poor resolution and needed a lot of light to work.

The S9 shoots super slow-mo at 960 frames per second, and captures 0.2 seconds of motion, stretching it to 6 seconds when you play it back.

One big difference with the S9 is how you trigger super slow-mo. When it’s set to automatic, a yellow square appears over the video preview.. when you start recording, the S9 looks for motion within the square, and when it sees movement it’ll trigger super slow-mo.

For me, the automatic mode makes it more than a gimmick .. you can get some great videos out of it without much effort. Here are a few (non-professional) examples that I’ve taken over the past few months;

These were quite simple to capture.. just practice a bit to see how it triggers and you can get good results.

Sharing attached USB storage in OSMC using NFS

As well as being attached to the living room TV for use as a media centre, I also wanted to be able to use my Raspberry Pi 3 B+ as a simple NAS for other TVs in the house to stream from.

The Pi I’m using has a 1TB desktop hard drive attached to it over USB, and I wanted a way to easily share the contents. It was actually relatively easy to set up… this is how to do it in OSMC;

  1. Install SSH to OSMC via the Store
  2. Now you can remote shell into the Pi to set up the network share
  3. Install NFS services using the following command;
    sudo apt-get install nfs-kernel-server
  4. Edit the file shares;
    sudo nano /etc/exports

    Add a share like this;

    /media 192.168.1.0/255.255.255.0(rw,fsid=0,insecure,no_subtree_check,async,crossmnt)

    (crossmnt fixed an issue where I could see the folders but no files)

  5. Restart the NFS service;
    sudo /etc/init.d/nfs-kernel-server restart

That’s it.. you should now be able to connect to the Raspberry Pi and see the files on any of the USB drives you’ve got attached.
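If you want to sanity-check the share from another Linux box before pointing a TV at it, you can mount it manually (the IP & mount point below are examples);

sudo mkdir -p /mnt/pi
sudo mount -t nfs 192.168.1.50:/media /mnt/pi
ls /mnt/pi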

Automated mains socket power-off for OSMC on a Raspberry Pi

I’ve chosen to replace an ageing mini-PC which I’ve used since 2010 with a new Raspberry Pi 3 B+ running OSMC. It makes for a really capable media centre which can play back newer h.265 HEVC video files at 1080p without any problems, or it can serve 4K files over NFS to a box with a hardware h.265 chip like the Fire TV Box.

This form factor is easy to take on holiday and you can use an old infrared remote control (or Harmony learning remote) with it too.

However, the one thing I’ve struggled with is how to make it easy for my family to switch on and off. The Pi doesn’t have a power button, and while some power supplies have an inline rocker switch, which almost fits the bill, I wanted something more automated.

Fortunately I had a spare Energenie power socket from a previous project where I use one to turn off our bass speaker when the TV isn’t on. These power sockets are controlled remotely (over RF) from a Pi which you attach an Energenie control board/shield to.

What I’ve done with the Pi 3 is have it powered through an Energenie socket, and set up a service that executes when it detects OSMC is shutting down. That service will make a quick HTTP call to the Pi with the Energenie controller shield, which will in turn send an RF signal to turn the mains socket off.

osmcshutdown

Here’s how you can set it up like I have…

Scripts for the Pi running OSMC

First, add a new service script.. create a new file at this path;

/etc/systemd/system/callenergenie.service


[Unit]
Description=Energenie Remote Call to Secondary Pi
Before=multi-user.target
After=network.target
Conflicts=shutdown.target

[Service]
ExecStart=/bin/true
ExecStop=/bin/sh /home/osmc/callenergenie.sh
Type=oneshot
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

Then enable it with;

sudo systemctl enable callenergenie.service

There are a couple of useful things happening in this service: the After parameter makes sure the stop script runs before the network is shut down (units stop in the reverse of their start order), and the Conflicts parameter means the service is stopped when OSMC shuts down, which is what triggers ExecStop.

Now add a helper script… this will make the webserver call as a background task, so control will be given back to the service immediately, rather than it waiting for the wget to complete.

/home/osmc/callenergenie.sh


#!/bin/sh
echo Calling energenie socket...
wget --quiet --background --output-document="callenergenie.log" "http://192.168.1.99/callenergenie.php?delay=10&switch=2&state=off"
echo Sleeping for 2 seconds
sleep 2
echo Done.
exit 0

This calls the PHP script, telling it which socket to turn off, and how long to delay before sending the command, which we’re doing so that the Pi has time to shut down before the power is cut.

Scripts for the Pi with the Energenie shield

This is the PHP script I added to the other Pi which was already configured to be a PHP web server.

/var/www/html/callenergenie.php


<?php
/* MattC - Call this with various parameters..
     callenergenie.php?
       delay  = time in seconds to sleep before calling Energenie
       switch = which socket to talk to
       state  = turn socket on/off
     e.g. callenergenie.php?delay=5&switch=2&state=off
*/
// Cast/escape the inputs so they can't be used to inject shell commands
$delay  = intval($_GET['delay']);
$switch = escapeshellarg($_GET['switch']);
$state  = escapeshellarg($_GET['state']);
print("Waiting ".$delay." seconds");
sleep($delay);
print("Switching socket ".$switch." ".$state);
exec("sudo python /var/www/callenergenie.py ".$state." ".$switch);
?>

To allow PHP to run the script as root, I needed to add the Apache user to the list of sudoers.. not that secure tho :( I’d be interested to hear from anyone who knows how to run the Energenie scripts as a regular user.. their Python doesn’t like it when it’s not root.

nano /etc/sudoers

www-data ALL=(ALL) NOPASSWD:ALL
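A somewhat tighter rule (my suggestion; untested, so treat it as a sketch) would scope the NOPASSWD entry to just that one command rather than everything;

www-data ALL=(root) NOPASSWD: /usr/bin/python /var/www/callenergenie.py *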

The nice thing about the PHP script is that we can actually call it to turn the Pi on remotely too.. so you could configure that into a widget on your phone, or add it to Alexa.
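For example, using the parameters documented in the script, a call like this would switch socket 2 back on straight away;

http://192.168.1.99/callenergenie.php?delay=0&switch=2&state=on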

Basic Automated Testing using FeatherTest for Chrome

There are a fair number of automated testing tools out there like Intern or Puppeteer, but I wanted something super simple and quick to set up to test a variety of pages on a site I develop.

I was making changes to JSON-LD, page metadata, and some of the data that’s sent to Omniture analytics. All the page types are slightly different, containing things like Documents, Discussions, Blog Posts, landing pages, etc. So I needed to go to a variety of URLs on my local dev environment to see what certain DOM elements got set to, and check the contents of some JavaScript variables.

I’d then need to do the same checks against our Dev, Staging and Production servers to make sure they all looked correct there too.

After a bit of searching I came across FeatherTest, which you feed a text file of JavaScript/jQuery that defines & runs the tests.

It’s very easy to set up tests, and the same test script file can be run against whatever site you’re looking at in your browser. For more info go here;

https://xaviesteve.com/5302/feathertest-automated-website-testing-extension-google-chrome/

Typical FeatherTest Script Structure + Syntax

Tip 1 – Preserve Log

The output from FeatherTest goes into the console, so if you’re testing multiple pages, you’ll need to check the ‘Preserve Log’ option, otherwise you’ll lose the output as Chrome navigates between pages.

Tip 2 – Output Formatting

When writing scripts, I’d recommend colouring the console output, and prefixing each line with something easily identifiable. In my case I’m using ‘SUDO – ‘ and colouring the text orange.

You can then simply filter the console to just see your output;

When you’ve filtered the output, you can then save it to a file (right-click, save as..) and format it into shape.

I’ve found this really useful to monitor how my SEO data has improved as I’ve made changes, and to sanity check I’ve not broken anything between releases.

Tip 3 – Variable access

Some of the test scripts I’ve written needed access to variables defined in the scope of the main window.. FeatherTest can’t normally access these variables, so we need a helper function.

The helper function either needs baking into your site’s JavaScript, or you can inject it using an extension like TamperMonkey. Using TamperMonkey means you can use it on whatever site you want, not just ones where you’re able to change the site’s code.

My code adds an event listener which FeatherTest can post messages to, requesting the values of variables.

e.g.

window.postMessage({ "action": "variable", "value": "myVar.hierarchy"}, window.origin);

This is the TamperMonkey/GreaseMonkey script I use;


// ==UserScript==
// @name        FeatherTest Variable Support
// @namespace   http://mysite
// @include     http*://mysite/*
// @description Add support for FeatherTest variable access. Matt Collinge.
// @version     1.0
// @grant       unsafeWindow
// ==/UserScript==

function featherTestSupport() {
  if (typeof(window.addEventListener)=='undefined') return;
  window.addEventListener('message', function(event) {
    // Make sure the request is coming from our own site (or FeatherTest)
    if (event.origin !== window.origin) return;
    if (typeof(event.data.action)=='undefined') return;
    if (event.data.action=='variable') {
      // Resolve the dotted variable path without using eval(); approach from
      // https://stackoverflow.com/questions/11924731/get-object-by-name-as-string-without-eval
      var variableValue = event.data.value.split('.').reduce(function (object, property) {
        return object[property];
      }, unsafeWindow);
      console.log('%c'+event.data.value+' = '+variableValue, 'color:orange');
    }
  }, false);
}
featherTestSupport();

FeatherTest Script – Example 1 – DOM Lookup

Pulling data out of the DOM is straightforward;


'feathertest'
console.log('%cSUDO – HOMEPAGE', 'color:orange');
location.href = '/site_root/'
60000
console.log('%cSUDO – '+location.href, 'color:orange');
console.log('%cSUDO – meta_title = '+$('meta[name="title"]').attr('content'), 'color:orange');
console.log('%cSUDO – meta_og:title = '+$('meta[property="og:title"]').attr('content'), 'color:orange');
console.log('%cSUDO – meta_description = '+$('meta[name="description"]').attr('content'), 'color:orange');
console.log('%cSUDO – meta_og:description = '+$('meta[property="og:description"]').attr('content'), 'color:orange');
console.log('%cSUDO – meta_og:image = '+$('meta[property="og:image"]').attr('content'), 'color:orange');
console.log('%cSUDO – meta_keywords = '+$('meta[name="keywords"]').attr('content'), 'color:orange');
console.log('%cSUDO – ', 'color:orange');

FeatherTest Script – Example 2 – Variable Access & JSON-LD

To access variables we’ll use the TamperMonkey function I posted above. We can also pull out any JSON-LD splats and present them quite nicely in the console output.


'feathertest'
console.log('%cSUDO – HOMEPAGE', 'color:orange');
location.href = '/site_root/'
60000
console.log('%cSUDO – '+location.href, 'color:orange');
// Output JSON-LD on page
$('script[type="application/ld+json"]').each(function(index,json){ console.log('%cSUDO – '+JSON.stringify(JSON.parse(json.innerHTML),null,2), 'color:orange') });
// Output the contents of some JavaScript variables
window.postMessage({ "action": "variable", "value": "myVar.hierarchy"}, window.origin);
window.postMessage({ "action": "variable", "value": "myVar.channel"}, window.origin);
window.postMessage({ "action": "variable", "value": "currentContainer.name"}, window.origin);
console.log('%cSUDO – ', 'color:orange');

Adding IR Remote Control Support to the Raspberry Pi

In my last post I took you through how I created a small portable media centre that I can easily take on holiday to hook up to the hotel TV.

To reduce the amount of space it took up, I used a cheap USB keypad which could be used to control the media centre. It worked really well & having something hard-wired meant I didn’t have to worry about a Bluetooth-paired device needing re-pairing.

However, what I then realised was it would be good to be able to use a spare remote control instead. I was using the OpenElec distribution and looked through their documentation for how to do this, but only found references to version 3 of the software (it’s on version 7) and how to get LIRC working with it. There were plenty of blog posts on hooking up IR support, but a lot of them were written 2-3 years ago, and the software has moved on somewhat.

Hardware Setup

What I did first was buy a suitable IR receiver. I chose the Vishay TSOP4838 (which costs less than £1) because of its voltage range (2.5-5.5V) and receiver frequency (38kHz). If you look at the datasheet for the product, you’ll see which pins should get wired up to the Pi;

Simply wire pin 1 to GPIO 18, pin 2 to GND, and pin 3 to a 3.3v power pin.

By using some short F-F jumper wires and a small cut in the side of the case, I was able to position the receiver neatly(ish) on the side.. it’s still easily removable, but you could integrate it into the case a bit more seamlessly than this ;)

Software Setup

Before this project I was using OpenElec, but had limited success getting the IR support working properly. I switched to OSMC which I’d read had better IR support through the main UI. I think I was actually on the right track with OpenElec, but I realised later that the old vintage Xbox remote I was trying to use wasn’t 100% working.

If you’re going to use a remote control that’s officially recognised, then you can skip this part about learning IR remote control codes.

Learning IR remote commands

The remote I found in the loft was an old DVD player remote which (unsurprisingly) wasn’t in the list of pre-recognised remotes in the OSMC installation. I needed to get the Pi to learn the IR pulses being sent out by the remote and map them to the Kodi functions.

1. First off, you need to SSH to the Pi. Username: osmc, Password: osmc.

2. Next you need to stop the LIRC service which is being locked/used by Kodi

sudo systemctl stop lircd_helper@lirc0

3. Now you can run the IR learn mode.. this will record what it finds to the config file you specify;

irrecord -d /dev/lirc0 /home/osmc/lircd.conf

4. Follow the on-screen instructions which will recognise your remote.

One observation I had was that this only worked properly if I stopped after the first prompt to press lots of keys on the remote.. if I completed the second stage, the key mapping didn’t work; if I ignored the second phase & let it abort, the learning process worked.

When it’s working, you’ll be able to enter the Kodi function (like KEY_UP, KEY_DOWN, etc) & map it to a key press on your remote;

Once you’ve mapped all the functions you want, we then need to move back to OSMC and tell it to use that config file we’ve just written.

OSMC Settings

In OSMC you need to do the following;

1. Disable the CEC service (via System Settings > Input > Peripherals > CEC Adapter).. disabling it seems to be needed for LIRC to work.

2. Now go into OSMC settings and pick the Raspberry Pi icon

3. Go into Hardware Support and enable LIRC GPIO Support. You shouldn’t need to change anything if you connected the sensor to GPIO 18.

4. Now go back and select the Remote Control option.

5. Ignore the list of pre-installed remotes and select Browse;

6. Navigate to the folder where LIRC wrote your config file;

7. Confirm the change & reboot the box;

That should be it.. your remote should be able to control everything in Kodi.
