WiFi QR Code Generator

Whenever we arrive at a place we have not been to before, getting properly connected to the internet is important.

Finding WiFi SSIDs and typing passwords is tedious and error-prone. There is an easier way, of course!

The owner of the wireless network can generate a QR code; you simply take a photo of it and your phone will automatically prompt you to join the wireless network without you having to type anything.

On your phone it looks like this:

To generate these QR codes containing all the information visitors/new users need to connect, this simple online generator can be used:

Ever wanted to create a cool QR code for your guests? But never wanted to type in your WiFi credentials into a form that submits them to a remote webserver to render the QR code? QiFi for the rescue! It will render the code in your browser, on your machine, so the WiFi stays as secure as it was before (read the code if you do not trust text on the internet :-))!

Qifi.org

Don’t worry: your access point information is not transferred over the internet. As this is open source, it can be verified that at the time of writing the data was held in HTML5 local storage in the local browser only and never transferred out.
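For the curious: the QR code itself just encodes a short string in the de-facto WIFI: format that both Android and iOS cameras understand. A minimal sketch in JavaScript – the SSID and password are made up, and WPA is assumed as the security type:

// builds the WIFI: payload that phone cameras understand;
// the special characters \ ; , : " must be backslash-escaped
function wifiQrPayload(ssid, password) {
    var esc = function (s) { return s.replace(/([\\;,:"])/g, '\\$1'); };
    return 'WIFI:T:WPA;S:' + esc(ssid) + ';P:' + esc(password) + ';;';
}

// wifiQrPayload('MyNetwork', 'secret;word')
// => 'WIFI:T:WPA;S:MyNetwork;P:secret\;word;;'

Feed that string into any QR renderer and you get exactly the kind of code QiFi produces in your browser.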

electronic firecracker: Chuwi Hi10 AIR Tablet

The Android tablets I am using for my kitchen scale display and for MyFitnessPal data entry are aging quite badly, and apart from the near-death of one tablet’s display, both are no longer supported and updated.

Using them therefore poses an increasing risk. After one of them almost died on me I was determined to replace them both. Looking at alternatives at the lowest possible price quickly showed that I was not going to get another Android tablet.

Instead I was ready to give a Chinese company a chance:

I ordered it on the 24th of June and it was delivered today. All in all I paid 136 Euro for the tablet and 45 Euro for the keyboard attachment.

Despite the ridiculously low price this thing is quite impressive. It sports a fast-enough 1.4 GHz Intel Atom processor and 4 GB of RAM. The 64 GB of solid-state storage were quickly supplemented with an additional 400 GB MicroSD card for local data storage.

As of writing this it is still installing and updating Windows 10 to 1903, but so far I am beyond impressed.

I’ll write more about the device when I’ve had more time to use it. One word on the keyboard attachment: the keyboard is good enough. Not great, but better than, for example, the one on the Pinebook. The touchpad is very small but works – the thing has a touchscreen anyway.

bridge the openvpn-status.log to MQTT

For big parts of my VPN needs I use OpenVPN. Especially on iOS devices the deep integration just works. Even to a degree that you enable the VPN once and the device will transparently keep it up / reestablish connections when required.

OpenVPN protocol has emerged to establish itself as a de-facto standard in the open source networking space with over 50 million downloads. OpenVPN is entirely a community-supported OSS project which uses the GPL license.

OpenVPN Community

I am using the dockerized version of OpenVPN. From there I’ve got several ways to get telemetry data (like connections, traffic, …) out of it. One way is the management interface provided by OpenVPN. Another way is by using the default openvpn-status.log file.

Since the easiest way out of the box was to use the logfile, I sat down and wrote a little 2mqtt bridge for the contents of that logfile.

It’s also dockerized so you can easily set it up by pointing the openvpn-status.log to the right volume/mount-point.
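To give an idea of what the bridge does at its core, here is a minimal sketch in JavaScript. This is not the actual project code and the topic names are illustrative only; it relies on the version 1 openvpn-status.log format, which lists connected clients as comma-separated lines:

// parse the CLIENT LIST section of openvpn-status.log (v1 format)
// and publish one MQTT message per value and client
var fs = require('fs');
var mqtt = require('mqtt');

var client = mqtt.connect('mqtt://broker.local'); // illustrative broker address

function publishStatus(file) {
    var lines = fs.readFileSync(file, 'utf8').split('\n');
    var inClients = false;
    lines.forEach(function (line) {
        // the client list starts after this header line...
        if (line.indexOf('Common Name,Real Address') === 0) { inClients = true; return; }
        // ...and ends where the routing table begins
        if (line.indexOf('ROUTING TABLE') === 0) { inClients = false; return; }
        if (!inClients || !line.trim()) { return; }
        // Common Name,Real Address,Bytes Received,Bytes Sent,Connected Since
        var f = line.split(',');
        client.publish('openvpn/' + f[0] + '/bytesreceived', f[2]);
        client.publish('openvpn/' + f[0] + '/bytessent', f[3]);
    });
}

publishStatus('/openvpn/openvpn-status.log');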

When done it’ll produce MQTT messages like this:

The set-up and start-up is rather simple:

docker run -d --restart=always --volume /openvpnstatus2mqttconfiglocation/:/configuration --volume /openvpnstatusloglocation/:/openvpn openvpn-status2mqtt

MQTT broker, topic prefix and so on are configured with the .json configuration file found in the project.

Of course everything I wrote is available on GitHub as open-source.

The immediate outcome of this is that with the always-on VPN I am now getting statistics about, for example, the data consumption of my iPhone.

The big traffic spike at 1 AM is the backup that my iPhone does every night. It is also very interesting how often the connection is closed and re-opened even without me using the phone…

IP-over-DNS

Picture yourself in this situation. You connect to a network and nothing works. Except for this:

It is quite common to have DNS working in networks while everything else is not. Sometimes the network requires a log-in to give you access to a small portion of the internet.

Now, with the help of a tool called iodine, you can get access to the full internet with only DNS working in your current network:

iodine lets you tunnel IPv4 data through a DNS server. This can be usable in different situations where internet access is firewalled, but DNS queries are allowed. 

It runs on Linux, Mac OS X, FreeBSD, NetBSD, OpenBSD and Windows and needs a TUN/TAP device. The bandwidth is asymmetrical with limited upstream and up to 1 Mbit/s downstream. 

iodine

Setting it up is a bit of work, but given that you have access to a well-connected server on the free portion of the internet anyway, it can be done easily.

Of course the source is on GitHub.

“are you still watching?” – tailflix

drop in replacement for tail -F that asks you if you are still watching

Just like Laura, I was having a moment when I stumbled across tailflix.

For those not getting the reference: at the end of an episode you’ve watched on Netflix, you will be shown another one, and so on, and so on. Until, if you have not touched the remote at all for several episodes, Netflix asks you “Are you still watching?”.

more small projectors coming

Good news everyone!

There are new micro-projectors coming and they are looking good. After the first really small, low-power but bright-enough LED projectors became available (see here), manufacturers have apparently added a laser to the equation.

It is said that…

Unlike traditional projectors that constantly need to be focussed, the Nebra AnyBeam picture is always perfect. You can project onto curved or irregular surfaces with ease – perfect if you want to use it on the fly, or if you want to try your hand at a bit of projection mapping!

This thing will be available, according to the Kickstarter, as an all-in-one package with power and HDMI inputs. It’s got 720p inputs. Well. Really?

And it will be available to the maker market as a RaspberryPi Hat…

disk down, rinse and repeat.

After the sudden death of a hard drive in one of the house’s storage arrays (after 55997 hours of service) at the beginning of this month, it has happened again:

With less than half the runtime of the previously dead disk this one is an early failure. Well within the warranty. Therefore the disk is already en route to be replaced through an RMA (the Western Digital RMA process has been spotless so far!).

Anyhow: This was a 4 TB drive. It’s in an array with 2-disk redundancy and 8 other drives. So the array is still operating with redundancy right now. Additionally a full backup exists, as well as a hot-standby (but slower) offsite mirror.

I am quite confident not to lose data. But this RAID sync is going to take a bit longer. As drives get bigger, syncs get longer.

archive your slice of the web

Many use and love archive.org – a service that roams the public internet and archives whatever it finds. It even creates timelines of websites, so you can dive right into history.

Have a piece of history right here:

You can have something similar hosted in your own environment. There are numerous open source projects dedicated to this archival purpose. One of them is ArchiveBox.

ArchiveBox takes a list of website URLs you want to archive, and creates a local, static, browsable HTML clone of the content from those websites (it saves HTML, JS, media files, PDFs, images and more).

I’ve done my set-up of ArchiveBox with the provided Dockerfile. Every once in a while it will start the docker container and check my Pocket feed for any new bookmarks. If any are found, it will archive them.

As the HTML as well as a PDF and a screenshot are saved, this is extremely useful for later look-ups and even full-text search indexing.

Miataru – open source location tracking

Not a lot of things are more private than your location.

Yet sometimes you wish to share your location with friends and family. May it be during an event or regularly. Maybe you want to

To allow the tech-minded audience to be in full control of what data is aggregated and stored regarding these needs, I created Miataru back in 2013 as an end-to-end open-source project.

With the protocol being completely open and ready to be integrated into any home automation, interested users can either utilize the publicly available (stores-nothing-on-disk) server or host their own.
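To give an impression of how simple the protocol is: a location update is a single JSON POST. The sketch below reflects the protocol documentation as I recall it – treat the field names as assumptions and check the spec on miataru.com before relying on them:

// sketch of a Miataru location update against the public server
var update = {
    MiataruConfig: {
        EnableLocationHistory: 'False',   // public server stores nothing on disk
        LocationDataRetentionTime: '30'   // minutes to keep the last location
    },
    MiataruLocation: [{
        Device: 'my-device-id',           // any unique identifier you choose
        Timestamp: Date.now().toString(), // epoch milliseconds, as a string
        Longitude: '10.837502',
        Latitude: '49.828925',
        HorizontalAccuracy: '50.0'
    }]
};

fetch('https://service.miataru.com/v1/UpdateLocation', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(update)
}).then(function (r) { return r.json(); }).then(console.log);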

Everything from the server to the clients is available in source and there’s a ready-to-go version of the client app on the AppStore.

this is a location sharing session when the blue pin met the yellow pin

pushing your myfitnesspal data to MQTT

MyFitnessPal is a great online service we are using to track what we eat. It’s well integrated into our daily routine – it works!

Unfortunately MyFitnessPal is not well set up for interfacing with 3rd-party applications. In fact it appears they are actively trying to make it harder for externals to utilize the data there.

To access your data there’s an open source project called “python-myfitnesspal” which allows you to interface with MyFitnessPal from the command line. This project uses web scraping to extract the information from the website and will break every time MyFitnessPal changes the design/layout.

Since the output of this is plain command line text, it is not of great use for a standardized system. What is needed is to have the data sent in a re-usable way into the automation system.

This is why I wrote the additional tool “myfitnesspal2mqtt“. It takes the output provided by python-myfitnesspal and sends it to an MQTT topic. The message then can be decoded, for example with NodeRed, and further processed.

As you can see in the image above I am taking the MQTT message coming from myfitnesspal2mqtt and decoding it with a bit of javascript and outputting it back to MQTT.

// Node-RED function node: explode one myfitnesspal2mqtt message into
// one message per value, so that every value ends up in its own MQTT topic.
// msg.payload is keyed by date, e.g.:
// { "2019-03-09": { complete: ..., totals: {...}, goals: {...}, measurements: {...} } }

var ttopic = msg.topic.toLowerCase();

// the first (and only) key of the payload is the date of the entry
var firstobject = Object.keys(msg.payload)[0];
var data = msg.payload[firstobject];

// helper: wrap one value into a message for a sub-topic
function m(subtopic, value) {
    return { topic: ttopic + subtopic, payload: value };
}

// Node-RED maps each element of the returned array to one output of this node,
// so the order of this array matters
return [
    m('/complete', data.complete),
    m('/total/sodium', data.totals.sodium),
    m('/total/carbohydrates', data.totals.carbohydrates),
    m('/total/calories', data.totals.calories),
    m('/total/fat', data.totals.fat),
    m('/total/sugar', data.totals.sugar),
    m('/total/protein', data.totals.protein),
    m('/measurement/weight', data.measurements.weight),
    m('/measurement/bodyfat', data.measurements.bodyfat),
    m('/goal/sodium', data.goals.sodium),
    m('/goal/carbohydrates', data.goals.carbohydrates),
    m('/goal/calories', data.goals.calories),
    m('/goal/fat', data.goals.fat),
    m('/goal/sugar', data.goals.sugar),
    m('/goal/protein', data.goals.protein),
    m('/date', firstobject),
    m('/caloriedeficit', data.goals.calories - data.totals.calories)
];

In the end it expands into a multitude of topics, with one piece of information per MQTT topic.
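With a hypothetical topic prefix of myfitnesspal/daniel, the result looks roughly like this (the values are made up, the topic suffixes come from the function above):

myfitnesspal/daniel/total/calories 1842
myfitnesspal/daniel/goal/calories 2100
myfitnesspal/daniel/caloriedeficit 258
myfitnesspal/daniel/measurement/weight 82.4
myfitnesspal/daniel/date 2019-03-09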

And with just that, every time the script is run (which I do in a docker container with a cronjob), the whole set of nutrition and health stats is pushed into and stored by the home automation system.

This way the data is of course also available for the home automation system to act on.

Like locking the fridge.

css font-feature “tnum”

Oh this is so useful for my head-up-display prototype implementation:

This feature replaces numeral glyphs set on glyph-specific (proportional) widths with corresponding glyphs set on uniform (tabular) widths. Note that some fonts may contain tabular figures by default, in which case enabling this feature may not appear to affect the width of glyphs.

tabular figures: tnum
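In practice it is a one-line CSS declaration; a minimal example (the .hud-value selector is made up for illustration):

.hud-value {
    font-feature-settings: "tnum";      /* enable tabular figures */
    font-variant-numeric: tabular-nums; /* the modern equivalent property */
}

With this, columns of changing numbers stop wobbling, because every digit occupies the same width.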

IoP – the internet of pets – predictive maintenance of a cat

In the interesting field of IoT a lot of buzz is made around the predictive maintenance use cases. What is predictive maintenance?

The main promise of predictive maintenance is to allow convenient scheduling of corrective maintenance, and to prevent unexpected equipment failures.

The key is “the right information in the right time”. By knowing which equipment needs maintenance, maintenance work can be better planned (spare parts, people, etc.) and what would have been “unplanned stops” are transformed to shorter and fewer “planned stops”, thus increasing plant availability. Other potential advantages include increased equipment lifetime, increased plant safety, fewer accidents with negative impact on environment, and optimized spare parts handling.

Wikipedia

So in simpler terms: if you can predict that something will break, you can repair it before it breaks. This improves reliability and saves costs, even though you repaired something that did not yet need repairs. At the very least you reduce inconvenience by repairing/maintaining while it is still easy to do, rather than under stress.

You would probably agree that these are very industry-specific use cases. It’s easier to understand when tied to an actual case that happened.

Let me tell you about a case that happened here last week. It happened to Leela – a 10-year-old white British Shorthair lady cat with gorgeous blue eyes:

Ever since her sister had developed a severe kidney issue we started to unobtrusively monitor their behavior and vital signs. Simple things like weight, food intake, water intake, movement, regularities (how often x/y/z).

I’ve built hardware to allow us to do that in the most simple and automated way. To get to know their weight, we simply put the kitty litter box on a heavily modified person scale. I wrote about that back in 2016.

When Leela now visits her litter box she is automatically weighed and it’s taken note that she actually used it.

A lot of data is aggregated from this, and a lot of processing is done on that data to generate indications of issues and alerts.

This alerted us last weekend that there could be an issue with Leela’s health, as she was suddenly visiting the litter box a lot more often throughout the day.

We did not notice anything ourselves. Leela behaved as she would every day, but the monitoring detected that something was not right.

What had happened?

The chart shows the hourly average and daily total visits to the litterbox.

On the morning of March 9th Leela had already been to the litter box above average. So much above average that it tripped the alerting system. You can see the faded red area at the top of the graph above showing the alert threshold. The red vertical line was drawn in by me, because this was when we got alerted.

Now what? She behaved totally normally, she just went to the litter box a lot more. We were concerned, as it matched her sister’s behavior, so we went through all the checklists with her on what the issue could be.

We monitored her closely, increased the water supplied and changed her food so she could fight a potential bladder infection (or worse).

By Monday she still did not behave differently to a degree that anyone would have become suspicious. Nevertheless my wife took her to the vet. And of course, after all tests were run, a bladder infection was diagnosed.

She got antibiotics, and around Wednesday (13th of March) she actually started to behave much like a sick cat would. By then she was already on day 3 of antibiotics, and after just one day of presumable pain she was back to fully normal.

Interestingly all of this can be followed in the monitoring. Even that she must have felt worse on the 13th.

With everything back to normal now, it seems that this monitoring has really led us to a case of “predictive cat maintenance”. We hopefully prevented a lot of pain by acting quickly – which was only possible through the monitoring in place.

Monitoring pets is seemingly becoming a thing – which led to the rather funky “Internet of Pets” declaration in the post title. I know of a certain Volker Weber who even wrote in the current c’t magazine about monitoring his dog’s location.

Health is a huge topic for the future of devices and gadgets. Everyone will casually accumulate more and more of these devices in their daily lives. Unfortunately most of those won’t be under your own control if you do not insist on being in control.

You do not have to build stuff yourself like I did. You only need to make the right purchase decisions according to the things important to you. And one of the items on that checklist should be: “am I in full control of the data flow and data storage?”

If you are not: do not buy!

By coincidence, the idea of having the owner of the data in full control of the data itself is central to my current job at MindSphere. With all the bells and whistles around the Industry IoT platform, it all comes down to keeping the actual owner of the data in control and in charge. A story for another post!

using calendars to automate your home

When you want to make things happen on a schedule, or log them when they took place, a calendar is a good option. Even more so if you are looking for an intuitive way to interact with your home automation system.

Calendars can be shared and your whole family can have them on their phones, tablets and computers to control the house.

In general I am using the Node-Red integration of Google Calendar to send and receive events between Node-Red and Google. I am using the node-red-node-google package which comes with a lot of different options.

Of course, when you are using those nodes you need to configure the credentials.

Part 1: Control

So you got those light switches scattered around. You got lots of things that can be switched on and off and controlled in all sorts of interesting ways.

And now you want to program a timer when things should happen.

For example: You want to control when a light is being switched on and when it’s then again been switched off.

I created a separate calendar on Google Calendar to which I add events in a notation I came up with: those events have a start datetime and of course an end datetime.

When I now create an event with the name “test” in the calendar…

And in Node-Red you would configure the “google calendar in”-Node like so:

When you have wired this correctly, every time an event in this calendar starts you will get a message with all the details of the event, like so:

With this you can now go crazy on the actions. Like using the event name to identify the switch to flip. Or the description to add extra information to your flow and the actions to be taken. This is fully flexible. And of course you can control it from your phone if you want.
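As a sketch of what such an action could look like in a function node – note that payload.summary below is an assumption about where the event title lands; inspect the actual message with a debug node first, since the fields depend on the node version:

// function node sketch: turn a starting calendar event into a switch command
var name = (msg.payload.summary || '').toLowerCase();

msg.topic = 'house/switch/' + name; // e.g. the event "test" -> house/switch/test
msg.payload = 'on';                 // event start means: switch on
return msg;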

Part 2: Information

You may also want events that happened to be logged in a calendar rather than a plain logfile. This comes in very handy, as you can easily see, for example, when people arrived at or left home, or when certain long-running jobs started/ended.

To achieve this you can use the calendar out nodes for Node-Red and prepare a message using a function node like this:

var event = {
    'summary': msg.payload,
    'location': msg.location,
    'description': msg.payload,
    'start': {
        'dateTime': msg.starttime, //'2015-05-28T09:00:00-07:00',
        'timeZone': 'Europe/Berlin'
    },
    'end': {
        'dateTime': msg.endtime, //'2015-05-28T17:00:00-07:00',
        'timeZone': 'Europe/Berlin'
    },
    'recurrence': [
        //'RRULE:FREQ=DAILY;COUNT=2'
    ],
    'attendees': [
        //{'email': 'lpage@example.com'},
        //{'email': 'sbrin@example.com'}
    ],
    'reminders': {
        'useDefault': true,
        'overrides': [
            //{'method': 'email', 'minutes': 24 * 60},
            //{'method': 'popup', 'minutes': 10}
        ]
    }
};
msg.payload = event;
return msg;

And as said – we are using it for all sorts of things: when the cat uses her litter box, when the washing machine, dryer or dishwasher starts and finishes, or simply to count how many Nespresso coffees we’ve made. Things like when members of the household arrive at or leave places like work or home. When movement is detected, or anything out of order or notable needs to be written down.

And of course it’s as convenient as can be – here’s the view of a recent Saturday:

pushing notifications in home automation

I was asked recently how I did enable my home automation to send push notifications to members of the household.

The service that serves all of our notification needs is Pushover.

Pushover gives you a simple API and device management, and allows you to trigger notifications with icons and text to be sent to either all or specific devices. It lets you specify a message priority, so that the most important push notifications are even pushed to the front when your phone is set to do-not-disturb.

The device management and API, as said, is pretty simple and straight forward.

apparently we’re sending a lot of notifications to these devices…

As for the actual integration I am using the NodeRed integration of Pushover. You can find it here: node-red-contrib-pushover.
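Preparing a notification for the pushover node boils down to setting a few message fields in a function node; a sketch (the field names are what I would expect the node to pick up – check its info panel, they may differ between versions):

// function node sketch feeding the pushover node
msg.topic = 'Home Automation';            // notification title (assumed field)
msg.payload = 'Washing machine finished'; // notification text
msg.priority = 1;                         // high priority pushes through quiet hours
return msg;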

With the newest client for iOS it even got Apple Watch integration. So you are not limited to text and images – you can also send out a state that updates automatically on your watch face.

As Pushover seems consistent in service and keeps bringing updates, I don’t miss anything – though I have not tested it extensively on Android.

bringing the thinclient back

I had to solve a problem. I wanted to have the exact same session and screen shared across different work places/locations simultaneously. From looking at the same screen from a different floor, to having the option to just walk over to the lab desk, solder some circuits together, and have the very same documents already opened on the screens over there.

One option was to use a tablet or notebook and carry it around. But this would not solve the need to have the screen content displayed on several screens simultaneously.

Also I did not want to rely on the computing power of a notebook/tablet alone. Of course those would get more powerful over time. But each step up would mean purchasing a new one.

Then, in a move of desperation, I remembered the “old days” when ThinClients used to be the new kid in town. And then I tried something:

I had just recently moved all house server infrastructure over to Linux and Docker. So what would keep me from utilizing the computing power of that one beefy server in the basement to host all of my desktop needs?

It turns out: Nothing really. Docker is well prepared to host desktop environments. With a bit of tweaking and TigerVNC Xvnc I was able to pre-configure the most current Ubuntu to start my preferred Mate desktop environment in a container and expose it through VNC.

If you wanted to replicate this I would recommend this repository as a starting point.

Even better I found that the RaspberryPi single board computers come with a free pre-licensed and accelerated version of RealVNC.

So I took one of those RaspberryPis, booted up the Raspbian Desktop Lite and connected to the Docker container’s VNC port. It all worked just like that.

this is the RaspberryPi client with the windowed docker container VNC session

The screenshot above holds an additional piece of information for you. I wanted sound! Video works smoothly up to a certain size of the moving video – after all, those RaspberryPis only come with sub-Gbit/s wired networking. But to get sound working I had to take some additional steps.

First, on the RaspberryPi that is going to output the sound to the speakers, you need to install and set up pulseaudio + paprefs. When you configure it to accept audio over the network, you can then configure the client to send it there.

In the docker container a simple command would then redirect all audio to the network:

pax11publish -e -S thinclient

Just replace “thinclient” with the IP or hostname of your RaspberryPi. After a restart, Chrome started to play audio across the network through the speakers of the ThinClient.

Now all my screens have those RaspberryPis attached to them, and with Docker I can run as many desktop environments in parallel as I wish. And because VNC does not care about how many connections are made to one session, I can have all workplaces across the house connected to the same session, seeing the same content at the same time.

And yes: the UI and overall feel is silky smooth. And since VNC adapts to some extent to the available bandwidth by changing the image quality, the VNC sessions are very much usable even across the internet. Given that there’s only 1 port for video and 1 port for audio, it’s even possible to tunnel those sessions to anywhere you might need them.

work and walk

Working in the IT industry requires us to spend copious amounts of time focused on our screens mostly sitting at our desks. But this does not have to be that way.

For me sitting down for long times creates a lot of unwanted effects and essentially leads to me not being able to focus anymore properly.

In 2015 my wife and I attacked that “health problem” as a team. And in the 12 months until 2016 we both lost 120 kg / 260lbs added up together in body weight and completely changed the way we deal with food and sport.

With that I also changed the way I work. Sitting down was from now on the exception.

Coinciding with this lifestyle change, my then-employer Rakuten rolled out its then new workplace concept, and everyone got great electric stand-up desks that let you change the height up and down effortlessly.

When I started with SIEMENS of course their workplace concept included standing desks as well!

For the times I am working from home, one of the workplaces is equipped with a standing desk with an additional twist.

pictured is a LifeSpan TR1200-DT5

So this desk lets you work while standing. But it also allows you to walk while you work. You can set the speed from 0 to 6.4 km/h.

Given a good headset, I personally can attend conference calls without anyone noticing that I am walking at about 4 km/h.

When I spend a whole day working at this desk, it is not uncommon to accumulate 25-40 km of total distance without really noticing it. Of course: later in the day you will feel those 40 km in one way or the other.

monthly distance … something between 50 and 350km … only 50km at the desk when walking outside is possible.

It took a bit of getting used to as your feet are doing something entirely different from what the rest of the body is doing. But at least for me it started to feel natural very quickly.

I’ve put two curved 24″ monitors onto it, and aside from the docking ports for a company notebook I am using thinclients to get the screens of my usual work machines teleported there. There’s a bit of a media set-up as well, as sometimes I am using one of the screens for watching videos.

For those now interested in the purchase of such a great walking desk: I can only recommend doing so! But be aware of some thoughts:

There are not a lot of vendors of such appliances. And those vendors are not selling a lot of them. This means: be ready for a € 1000+ purchase and be ready to shell out some good money on extended warranties.

My first desk + treadmill was replaced 3 times. It was LifeSpan’s first generation of treadmill desks, and it just kept exploding. I actually had glowing sparks of fire spitting out of the first-generation treadmill.

I returned it at no loss and waited for the second generation. This current, second generation of LifeSpan treadmill desks has been doing it for me for longer than the first generation ever had without breaking. Looking at the use of the device, I see it as a purchase over 5 years. After 5 years of actual and consistent use I wouldn’t be overly annoyed if the mechanical parts of the appliance stopped working. I am not expecting such a device to live much longer anyhow.

Energy-consumption-wise it’s quite impressive how much this thing draws. I wasn’t quite expecting those levels. So here’s for you to know:

this is a 28km workday with some pauses

So just around 500 Watts when in use. The 65W base load is the monitors and computers on top.

did you know: Linus Torvalds used to use a first generation model – the one that broke on me many times

I can only recommend to try something like this out. Unfortunately it’s quite hard to find a place to try it out. At least I was not able to try before buy.

But then again I could answer your questions if you had any.

wireless mesh network

Since AVM has started to offer wireless mesh network capabilities in their products through software updates I started to roll it out in our house.

Wireless mesh networks often consist of mesh clients, mesh routers and gateways. Mobility of nodes is less frequent. If nodes constantly or frequently move, the mesh spends more time updating routes than delivering data. In a wireless mesh network, topology tends to be more static, so that routes computation can converge and delivery of data to their destinations can occur. Hence, this is a low-mobility centralized form of wireless ad hoc network. Also, because it sometimes relies on static nodes to act as gateways, it is not a truly all-wireless ad hoc network.

Wikipedia

With the rather complex physical network structure and above-average number of wireless and wired clients the task wasn’t an easy one.

To give an impression of what is there right now:

So there’s a bit of almost everything. There’s wired connections (1Gbit to most places) and there is wireless connections. There are 5 access points overall of which 4 are just mesh repeaters coordinated by the Fritz!Box mesh-master.

There’s also powerline used for some of the more distant rooms of the mansion. All in all there are 4 powerline connections all of them are above 100 Mbit/s and one even is used for video streaming.

All is managed by a central Fritz!Box and all is well.

Like, without issues. Even interesting spanning-tree implementations like the one from SONOS are being properly handled and have always worked without issues.

The only other-than-default configuration I made on the Fritz!Box is that all well-known devices have their v4 IPs set to static, so they are not frequently switching around the place.

How do I know it works? After enabling the mesh, things started working that had not worked before. Before the mesh set-up I had several access points independent from each other on the same SSID. This would lead to hard connection drops when you walked between them. Roaming did not work.

With mesh enabled I’ve not seen this behavior anymore. All is stable even when I move actively between all floors and rooms.

how to find out who needs to clear out the dishwasher

We use the term “smart home” lightly these days. It has become a term of marketing and fantastic stories.

Considering how readily available lots of different sensors, actuators and personal assistants are these days, one would think that most people would start to expect more from the marketing “smart home”.

I believe that the smart is to be found in the small and simple. There are a lot of small things that make something feel smart without it actually being smart about anything.

Being smart is something not achieved yet – not even by a far stretch of the sense of the word. So let’s put that to the side of the discussion for now and move a simple thing into the middle of this article.

Have you ever had an argument about who should clear out the dishwasher after it has finished?

We had.

So we outsourced the discussion and decision to a 3rd party. We made our house understand when the dishwasher starts and ends its task. And made it flip a coin.

There was already a power consumption monitoring in place for the dishwasher. Adding a hysteresis over that monitoring would yield a simple “starts running” / “stops running” state of the dishwasher.

Pictured above is said power consumption.

  • When the values enter the red area in the graph, the dishwasher is considered to be running.
  • When they leave that area, the dishwasher is considered finished/not running.

Now add a bit of random coin-tossing by the computer: each time the dishwasher is detected to have started work, a message is sent out depending on the result of the coin toss.

That message is published and automatically displayed on all active displays in the house (TVs/…) and sent as push notifications to all members that need to be informed of this conclusive and important decision.

In short:

Everyone gets a push notification who is going to clear out the dishwasher based upon a coin-toss by a computer every time the dishwasher starts.

The base of all of this is a Node-RED flow that uses the power consumption MQTT messages as an input, outputs back to MQTT, and pushes out the push notifications to phones, screens and watches.

Additionally it creates a calendar entry with the start-finish time of the dishwasher run as well as the total energy consumption for this run.

Node-RED flow

The flow works like this: on the right, the message enters the flow from MQTT. The message itself contains just the value of the power consumed at this very moment – in this case, consumed by the dishwasher.

The power consumption is updated regularly, every couple of seconds. So every couple of seconds this flow runs and gets an updated value of the current power consumption.

Next a hysteresis is applied. In simple terms this means: when the value goes above a certain threshold the dishwasher is considered to be running. When it goes below a certain threshold then it is considered finished.

When the dishwasher changes its state to “running”, the flow will generate a random number between 0 and 1. This gives a 50:50 chance for either Steffi or Daniel to be the chosen one to clear out the dishwasher for this run. This message is sent out as a push notification to all phones, watches and TVs.

When the dishwasher finishes its run, the total energy consumption is taken and sent out as the “I am done” message. This information is also added to the calendar. Voilà.
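The whole decision logic fits into one small function node. Here is a sketch with made-up thresholds – tune ON_WATTS and OFF_WATTS to your dishwasher’s actual consumption curve:

// function node sketch: hysteresis over the power reading plus a coin toss
var ON_WATTS = 1000; // above this the dishwasher is considered running
var OFF_WATTS = 5;   // below this it is considered finished

var running = context.get('running') || false;
var watts = Number(msg.payload);

if (!running && watts > ON_WATTS) {
    context.set('running', true);
    // coin toss: who clears out the dishwasher this time?
    var who = Math.random() < 0.5 ? 'Steffi' : 'Daniel';
    return { payload: who + ' clears out the dishwasher this time!' };
}

if (running && watts < OFF_WATTS) {
    context.set('running', false);
    return { payload: 'Dishwasher finished.' };
}

return null; // no state change: nothing to send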

the real smart home has a calendar!

A calendar? Why a calendar, you may ask. Oh well, there are several reasons. Think of calendars as another way to interact with the house. All sorts of things happen on a timeline. A calendar is only a visual aid to interact with timelines.

Be it a home appliance running or motion being sensed by your home alarm system – all of that can be displayed in a calendar and thus automatically synced to all your devices capable of displaying this calendar.

And if you start adding entries to a calendar that the house uses to know what to do next… how about putting light on/off times into an actual calendar right on your phone, instead of a complicated browser user interface like the one many of those marketing smart-homes want us to use?

Never confuse wisdom with luck.

44th Rule of Acquisition / Ferengi

can your kitchen scale do this trick? – ESP8266+Load Cell+MQTT

Ever since we had changed our daily diet we started to weigh everything we eat or cook. Like everything.

Quickly we found that those kitchen scales you can buy cheaply either do not offer the convenience we are looking for, or regularly run out of power and need battery replacements.

As we already have all sorts of home automation in place anyway, the idea was born to integrate an ESP8266 into two of those cheap scales and – while ripping out most of their electronics – base the new scale functionality on the load cells already in the cheap scale.

So one afternoon in January 2018 I sat down and put all the parts together:

ESP8266 + HX711 + 4 Load Cells
my notes of the wiring… this might be different for your load cells…

After the hardware portion I sat down and programmed the firmware of the ESP8266. The simple idea: it should connect to WiFi and to the house MQTT broker.

It would then send its measurements into a /raw topic as well as receive commands (tare, calibration) over a /cmd topic.

Now the next step was to get the display of the measured weights sorted. The idea for this: write a web application that connects to the MQTT broker’s websocket and receives the stream of measurements. It adds some additional logic, like a “tare” button in the web interface, as well as a list of recent measurements that can be stored for later use.

the web app. I am not a web designer – help me if you can! ;-)

An additional automation: if the tare button is pressed and the weight is bigger than 10 g, the weight is automatically added to the measurements list in the web app – no matter which of the tare buttons was used, the one in the web app or the physical button on the actual scale. Very practical!
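The core of such a web app is just a few lines with MQTT.js over websockets. A sketch – the broker URL, topic names and element id are placeholders, and the broker needs a websocket listener enabled:

// assumes the MQTT.js browser bundle is loaded
var client = mqtt.connect('ws://broker.local:9001');

client.on('connect', function () {
    client.subscribe('kitchen/scale1/raw');
});

// show every raw measurement as it streams in
client.on('message', function (topic, message) {
    document.getElementById('weight').textContent = message.toString() + ' g';
});

// the web tare button simply publishes into the /cmd topic
function tare() {
    client.publish('kitchen/scale1/cmd', 'tare');
}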

Here’s a short demo of the logic, the scale and the web app in a video:

You can grab the source code for the Arduino ESP8266 firmware as well as the source code for the web application here.

LED projector for your home automation needs

In 2017 Texas Instruments had released a line of cheap industry grade LED projectors meant to be used in production lines and alike:

DLP® LightCrafter Display 2000 is an easy-to-use, plug-and-play evaluation platform for a wide array of ultra-mobile and ultra-portable display applications in consumer, wearables, industrial, medical, and Internet of Things (IoT) markets. The evaluation module (EVM) features the DLP2000 chipset comprised of the DLP2000 .2 nHD DMD, DLPC2607 display controller and DLPA1000 PMIC/LED driver. This EVM comes equipped with a production ready optical engine and processor interface supporting 8/16/24-bit RGB parallel video interface in a small-form factor.

Texas Instruments

And of course this got picked up by the makers – by people like MickMake, who designed an adapter PCB connecting the RaspberryPi Zero W to the smallest projector available from TI.

After I had learned about the existence of those small projectors I had to get a couple and try for myself. There would be so many immediate and potential applications in our house.

2x DLPLDCR2000EVM with MickMake adapter and RaspberryPi Zero W

After having them delivered I did the first trial with just a breadboard and the Raspberry Pi 3.

first light!

The projector module has a native resolution of 640×360 – so not exactly high pixel density. And of course, if the image is projected bigger, the screen-door effect is quite noticeable. Also it’s not the brightest of images, depending on the size. For the usual use cases the brightness is definitely sufficient.

Downsides

  • too low brightness for large projection sizes – no daylight projection
  • the low resolution is an issue for text and web content – it is less of an issue for moving pictures than you might think. Video playback is well usable.
  • flimsy optics: you need to set focus manually – it works, but there is no automatic focus or the like.

Upsides

  • very low powered – a 2.5A/5V USB power supply is sufficient for Pi Zero + projector at full brightness (30 lumen)
  • low brightness is not always bad – one of our specific use cases requires an as-dim-as-possible image with fine-grained control of the brightness, which this projector has.
  • extremely small footprint/size – allows integrating this device into places you would not have thought of.
  • almost fully silent operation – the only moving part that makes a sound is the color wheel inside the DLP module. You have to put your ear right onto it to hear anything.
  • passive cooling is sufficient – even at full brightness an added heat sink is enough to dissipate the heat generated by the LED.

So what are these use cases that require such a projector you ask?

Night status display:

For the last 20+ years I have been used to sleeping with a “night playlist” running. So far an LED TV was used at the lowest brightness possible. Still, it was pretty bright. The projector module allows dimming the brightness down to almost “moon brightness” and also allows adjusting the color balance towards the reds. This means: the perfect night projection is possible! And the power consumption is extremely low. A well-watchable, lowest-brightness, red-shifted image also means much lower temperatures on the projector module – it’s crazy how low-powered and low-temperature this runs.

at 75% brightness (camera did not properly focus…)

Season Window Projection:

Because the projector is small, low-powered and bright enough for back-lit projection we tried and succeeded with a Halloween window projection scene the last season.

outside view
inside view

It really looks funky from the outside – funky enough to have several people stop in front of the house and point fingers. All that while power consumption was really low.

House overall status projections:

When projecting information is that cheap and power efficient, it really shines when used to display overall status information like house-alarm status, general switch maps, locations of family members and so on. I’ll leave those to your imagination, as this kind of status display gives away a lot of personal information that isn’t well suited for the internet.

power consumption after the ssd swap

A week after swapping out mechanical hard drives for SSDs it’s time to look at what it meant in the longer run for the power consumption of the server.

15 watt less at least

Depending on what the server is asked to do – high or low CPU load and so on – the power consumption fluctuates, but it’s clearly visible that the averages are about 15 watts lower at all times. Great!

out with the old, in with the new – house gets ssd upgrade

A week ago I wrote about another mechanical hard drive that was about to bite the dust in our house’s elaborate set-up.

Not having time for a full day of focus, I postponed the upgrade to this Saturday – with the agreement of the family, as they suffer through the maintenance period as well.

The upgrade needed cautious preparation in order to be doable in one sitting. It was also meant to be some sort of disaster-recovery drill: I would restore the house’s central Docker and service infrastructure from scratch along the way.

And this would need to happen:

  • all services, zfs pools, docker containers, configurations needed to be double checked for full backup – as this would be used to restore all (ZFS snapshots are just the bomb for these things!)
  • the main central docker server would have to go down
  • get all hard disks ripped out
  • SSDs put in and properly configured
  • get a fresh Ubuntu 18.04 LTS set-up and booting from ZFS on a NVMe SSD (bios update(s)!, secure boot disabling, ahci enabling, m.2 instead of sata express switching…you get the idea)
  • get the network set-up in order: upgrading from Ubuntu 16.04 to 18.04 means ifupdown networking was replaced by netplan. Hurray! Not.
  • get docker-ce and docker-compose ready and set-up and all these funky networkings aligned – figure out in this that there are major issues with IPv6 in docker currently.
  • pull in the small number of still needed mechanical hard disks and import the ZFS pools
  • start the docker builds from the backup (one script \o/)
  • start the docker containers in their required order (one script \o/)

Apart from some hardware/BIOS-related issues and the rather unexpected netplan introduction, everything went fairly well. It just takes ages to see data copied.

the “heartbeat” is a general term in our house for how busy everything is. It’s an artificial value calculated from sensor inputs/s, actions taken and so on. A good indication if there are issues. During the time of maintenance (orange/red) it wasn’t updated and was stuck at the pre-given value.

Bandwidth was the only real issue with this disaster recovery. All building blocks fell into place and no unplanned measure had to be taken. The house systems went partially down at around 12:30 and were back up 10 hours later at 22:00. Of course non-automated things like the internet kept working, and all switches fell back to being plain manual push-buttons. So everything could still be done, just with a lot less convenience.

All in all there are more than 40 vital Docker-container-based services that get started one after the other and interconnect to deliver the full house home automation. With the added SSD performance this whole ship is much, much more responsive. And hopefully less prone to mechanical defects.

Backup and disaster preparations proved to be practical and working well. There was no beat missed (except sensor measurements during the 10 hours of downtime) and no data lost.

Core i3 at 3.7 GHz with 32 GB RAM is sufficient, and tuned for power consumption

What could be done better: it could be much more straightforward if there were fewer dependencies on external repositories/docker-hub. Almost all issues that came up with containers were from the fact that the maintainers had, just a day before, introduced something that kept them from spinning up naturally. Bad luck. But that can be helped! There’s now a multi-page disaster-recovery-procedure document that will be used and updated in the future.

Oh, and what speeds am I seeing? The promised 3 GByte/s read and write speeds are real. It’s quite impressive to frequently see 4-digit MByte/s values in iotop.

I almost forgot! During this exercise I was in the server room for less than 30 minutes. The rest I spent at a warm and nice work-desk set-up I am using in the house as much as I can – I will tell you about it in another article. But the major feature of this work-desk set-up is that it is (a) a standing desk and (b) has a treadmill under it. Yes. Treadmill.

You will get pictures of the set-up in that mentioned article, but since I spent more than 10 hours walking on Saturday doing the disaster recovery, I want to give you a glimpse of what such a set-up means:

46 km while doing disaster recovery successfully.

indoor location tracking with ESP32

This project uses the same approach that I took for my ESP32 based indoor location tracking system (by tracking BLE signal strength). But this project came up with an actual user interface – NICE!

“Indoor positioning of a moving iBeacon, using trilateration and three ESP32 development modules. ESP32 modules report all beacons they see, to MQTT topic. Dashboard subscribes to this topic, and shows the location of beacons which are seen by all three stations.”
(https://github.com/jarkko-hautakorpi/iBeacon-indoor-positioning-demo)
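For reference, the usual way to turn a BLE signal strength reading into a rough distance estimate is the log-distance path-loss model; a sketch (txPower, the RSSI measured at 1 m, and the environment factor n are per-beacon calibration values, not constants from this project):

// distance in meters = 10 ^ ((txPower - rssi) / (10 * n)), n roughly 2..4 indoors
function rssiToDistance(rssi, txPower, n) {
    return Math.pow(10, (txPower - rssi) / (10 * n));
}

// e.g. rssiToDistance(-75, -59, 2.5) -> about 4.4 meters

Three stations with such distance estimates are what the demo trilaterates over.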

Apple Airplay for SONOS (in Docker)

We’ve got a couple of SONOS based multi-room-audio zones in our house and with the newest generation of SONOS speakers you can get Apple Airplay. Fancy!

But the older hardware does not support Apple Airplay due to its limited hardware. This is too bad.

So once again Docker and OpenSource + Reverse-Engineering come to the rescue.

AirConnect is a small but fancy tool that bridges SONOS and Chromecast to Airplay effortlessly. Just start and be done.

It works a treat, and all of a sudden all those SONOS zones become Airplay devices.

There is also a nice dockerized version that I am using.

waking up to another dying hard disk – upgrade time!

At our house I am running a medium-sized operation when it comes to all the storage and in-house / home-automation needs of the family.

This is done by utilizing several products from QNAP, Synology and a custom built server infrastructure that does most of the heavy-lifting using Docker.

This morning I woke up to an email stating that one of the mirrored drives in the machine is reporting read errors.

Since this drive is part of a larger array of spinning-rust-style hard disks, just replacing it would work. But given the lifetime of those drives I am not particularly interested in more replacements in the very near future. So a more general approach seems right.

63083 lifetime hours = 2628 days = 7.2 years powered up

You can see what I mean. This drive is old. Very old. And so are its mates. Actually this is the newest drive in an array of another 6 or so 1.5 TB and 1 TB drives.

This redundant array is in fact still quite small and not fully used, as most storage-intensive, non-service-related disk space demands have moved to iSCSI and other means. So it is no longer the case that this many disks, this much redundancy and this little disk space are needed. Actual current space utilization is about 20% of the available 2 TB volume.

Time for an upgrade! Taking a look at the manual of the mainboard I had replaced 2 years ago, I found that it has dual NVMe M.2 ports – from which I can boot, according to that same manual.

So I thought: let’s start by replacing the boot drives and the /var/lib/docker portions with something fast.

To my surprise, Samsung is building 1 TB NVMe M.2 SSDs at a price I expected to be much higher.

Nice! So let me report back when this has shipped and I can start the re-set-up of the operating system and Docker environment. Which, in all fairness, should be straightforward. I will upgrade from Ubuntu 16.04 LTS to 18.04 LTS in the same step – and the only more complex things I expect are the boot-from-ZFS (on Linux) and iSCSI set-up of the machine.

If you have any tips or best practices, let me know.

I have just started catching up on what happened with ZFS on Linux in the last 2 years. My initial decision to use Linux as the main driver OS and Ubuntu as the distribution 2 years ago was based upon the expectation of not having this as my hobby for the next years. And that expectation was fulfilled by Ubuntu 16.04 LTS.

small and cheap multi-sensor nodes for home automation

I have reported on my efforts to develop an indoor location tracking system previously. Back in 2017, when I started to work on this, I only planned to utilize inexpensive Espressif ESP32 SoCs to look for Bluetooth beacons.

In the time between, I figured that I could, and should, also utilize the multiple digital and analog input/output pins this specific SoC offers. And what better to utilize them with than a range of sensors that could now also feed their measurements into an MQTT feed along with the Bluetooth details.

And there is a whole lot of sensors that I’ve added. On a breadboard it looks like this:

So what do we have here:

  • Motion sensor
  • Temperature sensor
  • Humidity sensor
  • Light sensor
  • Barometric pressure sensor
  • and of course an RGB LED to show a status

I’ve already written the software, and after 3 weeks of extensive testing it seems stable. I will release it eventually, later in the process.

I’ve also found plastic cases that will fit this amount of sensors, replacing the cases I had already bought for the ESP32 alone. For now I’ll close this article with some pictures.

The MQTT feed one of these nodes produces…

…and the Grafana dashboard I am using for this specific prototype device.