a terminal for Windows

As Windows has lately been making an effort to stay out of the way as an operating system and user experience, it seems to be regaining attention from developers.

For me this is all quite strange, as I would personally prefer switching from macOS to Linux rather than to Windows.

But for those occasions when you need to go with Windows, there's a terminal application now that gives you, well, a good terminal. Try FluentTerminal.

EFI boot app in C#

Zero-Sharp uses the CoreRT runtime to demonstrate, very impressively, how to get down to bare-metal applications in C#. It compiles programs into native code…

Everything you wanted to know about making C# apps that run on bare metal, but were afraid to ask:

A complete EFI boot application in a single .cs file.

Michal Stehovsky on Twitter

This is seriously impressive and the screenshot says it all:

a very cool “Hello World“.

Video Game History

Have you ever asked yourself what the generations coming after us will know about the culture we grew up with? As much as computers are a part of my story, a bit of gaming is as well.

From games on tape to games on floppy disks to CDs to no-media game streaming, it has been quite a couple of decades. And with the demise of physical media, access to the actual games will become harder, especially for games never delivered outside of online platforms. Those platforms will die. None of them will remain forever.

Hardware platforms follow the same logic: Today it’s the new hype. Tomorrow the software from yesterday won’t be supported by hardware and/or operating systems. Everything is in constant flux.

Emulation is a great tool for many use-cases. But it probably won’t solve all challenges. Preserving access to software and the knowledge around the required dependencies is the mission of the Video Game History Foundation.

Video game preservation matters because video games matter. Games are deeply ingrained in our culture, and they’re here to stay. They generated an unprecedented $91 billion dollars in revenue in 2016. They’re being collected by the Smithsonian, the Museum of Modern Art, and the Library of Congress. They’ve inspired dozens of feature films and even more books. They’re used as a medium of personal expression, as the means for raising money for charity, as educational tools, and in therapy.
And yet, despite all this, video game history is disappearing. The majority of games that have been created throughout history are no longer easily accessible to study and play. And even when we can play games, that playable code is only a part of the story.

Video Game History Foundation

Miataru – open source location tracking

Not a lot of things are more private than your location.

Yet sometimes you wish to share your location with friends and family, be it during an event or on a regular basis.

To allow a tech-minded audience to stay in full control of what data is aggregated and stored for these needs, I created Miataru back in 2013 as an end-to-end open-source project.

With the protocol being completely open and ready to be integrated into any home automation, interested users can either utilize the publicly available (stores-nothing-on-disk) server or host their own.

Everything from the server to the clients is available in source and there’s a ready-to-go version of the client app on the AppStore.
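
To give an idea of how simple the protocol is, here is a rough sketch of a location update against the public server in JavaScript. The endpoint and field names follow my reading of the protocol documentation, so double-check against the spec before relying on them:

// minimal sketch: push a location update to the public Miataru server
// endpoint and field names per my reading of the protocol docs - verify against the spec
const update = {
    MiataruConfig: {
        EnableLocationHistory: 'False',  // the public server keeps nothing on disk
        LocationDataRetentionTime: '30'  // minutes to keep the last known location
    },
    MiataruLocation: [{
        Device: 'my-device-id-0001',     // an arbitrary device identifier you choose
        Timestamp: Date.now().toString(),
        Longitude: '10.837502',
        Latitude: '49.828925',
        HorizontalAccuracy: '50'
    }]
};

fetch('https://service.miataru.com/v1/UpdateLocation', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(update)
}).then(r => r.json()).then(console.log);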

this is a location sharing session when the blue pin met the yellow pin

OpenSource drawing: Krita

I just recently learned about Krita, an open-source drawing application that allows you to… oh well… do free-hand drawings.

Krita is a FREE and open source painting tool designed for concept artists, illustrators, matte and texture artists, and the VFX industry. Krita has been in development for over 10 years and has had an explosion in growth recently. It offers many common and innovative features to help the amateur and professional alike. See below for some of the highlighted features.

Krita highlights

Taking a look at the gallery confirms that I cannot draw. Frustration about that is limited, because there are so many nice drawings to gaze at!

Also, this is a multi-platform application: it's available for Windows, macOS and Linux.

pushing your myfitnesspal data to MQTT

MyFitnessPal is a great online service we are using to track what we eat. It’s well integrated into our daily routine – it works!

Unfortunately MyFitnessPal is not well set up for third-party applications to interface with it. In fact, it appears they are actively trying to make it harder for external tools to utilize the data there.

To access your data there's an open source project called "python-myfitnesspal" which allows you to interface with MyFitnessPal from the command line. This project uses web scraping to extract the information from the website and will break every time MyFitnessPal changes the design/layout.

Since its output is plain command-line text, it is not of great use for a standardized system. What is needed is a way to send the data into the automation system in a reusable format.

This is why I wrote the additional tool "myfitnesspal2mqtt". It takes the output provided by python-myfitnesspal and sends it to an MQTT topic. The message can then be decoded, for example with NodeRed, and processed further.

As you can see in the image above, I am taking the MQTT message coming from myfitnesspal2mqtt, decoding it with a bit of JavaScript and outputting it back to MQTT.

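// Node-RED function node: fan the myfitnesspal2mqtt JSON out into one message per MQTT topic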
var complete = {};
var sodium = {};
var carbohydrates = {};
var calories = {};
var daydate = {};
var fat = {};
var sugar = {};
var protein = {};

var weight = {};
var bodyfat = {};


var goalsodium = {};
var goalcarbohydrates = {};
var goalcalories = {};
var goalfat = {};
var goalsugar = {};
var goalprotein = {};

var caloriesdiff = {};

var ttopic = msg.topic.toLowerCase();

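// the myfitnesspal2mqtt payload is keyed by date; take the first (usually only) day entry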
var firstobject = Object.keys(msg.payload)[0];

complete.payload = msg.payload[firstobject].complete;
complete.topic = ttopic+'/complete';

sodium.payload = msg.payload[firstobject].totals.sodium;
sodium.topic = ttopic+'/total/sodium';
carbohydrates.payload = msg.payload[firstobject].totals.carbohydrates;
carbohydrates.topic = ttopic+'/total/carbohydrates';
calories.payload = msg.payload[firstobject].totals.calories;
calories.topic = ttopic+'/total/calories';
fat.payload = msg.payload[firstobject].totals.fat;
fat.topic = ttopic+'/total/fat';
sugar.payload = msg.payload[firstobject].totals.sugar;
sugar.topic = ttopic+'/total/sugar';
protein.payload = msg.payload[firstobject].totals.protein;
protein.topic = ttopic+'/total/protein';

weight.payload = msg.payload[firstobject].measurements.weight;
weight.topic = ttopic+'/measurement/weight';
bodyfat.payload = msg.payload[firstobject].measurements.bodyfat;
bodyfat.topic = ttopic+'/measurement/bodyfat';

goalsodium.payload = msg.payload[firstobject].goals.sodium;
goalsodium.topic = ttopic+'/goal/sodium';
goalcarbohydrates.payload = msg.payload[firstobject].goals.carbohydrates;
goalcarbohydrates.topic = ttopic+'/goal/carbohydrates';
goalcalories.payload = msg.payload[firstobject].goals.calories;
goalcalories.topic = ttopic+'/goal/calories';
goalfat.payload = msg.payload[firstobject].goals.fat;
goalfat.topic = ttopic+'/goal/fat';
goalsugar.payload = msg.payload[firstobject].goals.sugar;
goalsugar.topic = ttopic+'/goal/sugar';
goalprotein.payload = msg.payload[firstobject].goals.protein;
goalprotein.topic = ttopic+'/goal/protein';

caloriesdiff.payload = msg.payload[firstobject].goals.calories - msg.payload[firstobject].totals.calories;
caloriesdiff.topic = ttopic+'/caloriedeficit';

daydate.payload = firstobject;
daydate.topic = ttopic+"/date";

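// with the function node configured for 17 outputs, each message below goes to its own output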
return [complete, sodium, carbohydrates, calories, fat, sugar, protein, weight, bodyfat, goalsodium, goalcarbohydrates, goalcalories, goalfat, goalsugar, goalprotein, daydate, caloriesdiff];

In the end it expands into a multitude of topics with one piece of information per MQTT topic.

And with just that, every time the script is run (which I do in a Docker container, triggered by a cronjob), a whole lot of nutrition and health stats are pushed into and stored by the home automation system.

This way they are, of course, also available for the home automation system to act on.

Like locking the fridge.

the interesting bit about google's game streaming

In 2012 I experienced streamed gameplay for the first time, as a beta user of the OnLive service, which created a bit of buzz back then.

Last week Google announced that it is stepping into the game streaming business as well. Google Stadia is the Google-powered game streaming platform, and it comes with its own controller.

3 color variants

And this controller is the most interesting bit. We have seen video live streaming. We have seen and played streamed games. But every time, we needed some piece of software or hardware that brought screen, controller and player together.

The Google Stadia controller does not connect to the screen in front of you. The screen, for all it knows, just shows a low-latency video/audio stream.

The controller connects to your wifi and directly to the game session. Everything you input with the controller is sent straight to the Google Stadia session in a Google datacenter, with no dedicated console hardware in between. All of a sudden the screen is only a screen, and the controller connects to the "cloud console" far, far away as if it was sitting right below the screen. This will make a huge difference!

css font-feature “tnum”

Oh this is so useful for my head-up-display prototype implementation:

This feature replaces numeral glyphs set on glyph-specific (proportional) widths with corresponding glyphs set on uniform (tabular) widths. Note that some fonts may contain tabular figures by default, in which case enabling this feature may not appear to affect the width of glyphs.

tabular figures: tnum
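
For my prototype this boils down to a one-line style change. A minimal sketch (the '#speed' selector is made up for illustration, and the font itself has to contain tabular figures for this to have any effect):

// minimal sketch: switch a HUD element to tabular figures so digits stop jumping
// '#speed' is a made-up selector for illustration
const el = document.querySelector('#speed');
el.style.fontVariantNumeric = 'tabular-nums';       // the modern CSS property
// older engines: el.style.fontFeatureSettings = '"tnum" 1';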

intuitive shell command explanations

Whether you want to or have to, you end up using shells – command line interfaces. And that's something that regularly leads to Stackoverflow/Google sessions, or hours of studying man pages.

But there’s a better way to view and understand these man-pages. There’s explainshell.com. Here is an example of what it can do:

As you can see, it does not just take a single command and show you the meaning/function of a parameter. It takes complex, structured command lines and unfolds them nicely onto a web page. Even the harder examples:

why I still can’t endure using Android

I own some Android devices as I am actively trying out Android every once in a while when a new version arrives.

Every time, the usability was what eventually put me off and made me stop using it.

This is indicative of my experience:

source: Twitter

Simple things like the scrolling or tapping never worked for me. Of course it worked after the 2nd or 3rd tap. But not as it “should” feel.

I own Google Nexus devices as well as third-party Android tablets from Asus. The basics never worked.

Why is that?

Twitter Blocklists

My usual Twitter use looks like this: I am scrolling through the timeline, reading up on things, and I see an ad. I click block, and never again will I see anything from this advertiser. As I've written here earlier.

As Twitter is also a place of very disturbing content there are numerous services built around the official block list functionality. One of those services is “Block Together“.

Block Together is designed to reduce the burden of blocking when many accounts are attacking you, or when a few accounts are attacking many people in your community. It uses the Twitter API. If you choose to share your list of blocks, your friends can subscribe to your list so that when you block an account, they block that account automatically. You can also use Block Together without sharing your blocks with anyone.

blocktogether.org

I’ve signed up and apparently this is as easy as it gets when you want to share block lists.

There seem to be more people that use Twitter like I do. For example Volker Weber wrote about his handling of “promoted content”.

My block list on Twitter currently includes 1881 accounts, and these are only accounts that put paid promotions into my timeline without my request.

I’ve read that Volker has such a long list as well – maybe it’s worth sharing as Volker is one person I would trust on his decisions for such a list. (vowe is a good mother!)

and then there’s Chrome OS.

I recently wrote about how I am using ThinClients in our house to always have a ready-to-use working environment that gets shared across different desks and workplaces.

To complete the zoo of devices, I wanted to take the chance to write about another device we're using when the purpose fits: Chrome OS devices.

A little over a year ago I was given an HP Chromebook 11 G5, and this little thing has been in use ever since.

The hardware itself is very average and works just right. The only two things that could be better are the display and the trackpad. With the trackpad you can help yourself with an external mouse.

The display works for the device size but the resolution being 1366×768 is definitely a limiting factor for some tasks.

What is not a limiting factor, astonishingly, is the operating system. I did not have any expectations at all when I first started using the Chromebook but everything just fell into place as expected. A device with almost no local storage and everything on the google cloud as well as a device that you can simply pick up and start using with just your google account may not sound crazy innovative. But let me tell you: if you start living that thin client, cloud stored life these Chrome OS devices hit the spot perfectly.

Everything updates in the background and as long as you are okay with web based applications or Android based applications you are good to go.

being productive?

Did I miss anything function-wise? Yes: at the beginning there were no real shell or Linux tools available natively on Chrome OS. This has changed.

Chrome OS now ships with Linux inside, and it is exposed to the user.

Would I buy another one or do I recommend it and for whom? I would buy another one and I would recommend it for certain audiences.

I would recommend it for anyone who does not need to play games that are unavailable in the Google Play Store – anything that can be done on the web can be done with the Chromebook. And as long as there is no requirement for anything native or higher-spec that forces you to keep "Windows-as-a-hobby" or a beefy macOS device sitting around, I guess these inexpensive Chrome OS devices really have their niche.

For kids – I guess this would make a great “my-first-notebook” as it works when you need it and does not lock you in too much if you wanted to start exploring. But then again: what do I know – I do not have kids.

using calendars to automate your home

When you want to make things happen on a schedule, or log them when they took place, a calendar is a good option. Even more so if you are looking for an intuitive way to interact with your home automation system.

Calendars can be shared and your whole family can have them on their phones, tablets and computers to control the house.

In general I am using the Node-Red integration of Google Calendar to send and receive events between Node-Red and Google. I am using the node-red-node-google package which comes with a lot of different options.

Of course, when you are using those nodes you need to configure the credentials:

Part 1: Control

So you got those light switches scattered around. You got lots of things that can be switched on and off and controlled in all sorts of interesting ways.

And now you want to program a timer for when things should happen.

For example: you want to control when a light is switched on and when it is switched off again.

I created a separate Google calendar to which I add events in a notation I came up with; those events have a start datetime and, of course, an end datetime.

When I now create an event with the name “test” in the calendar…

And in Node-Red you would configure the “google calendar in”-Node like so:

When you have wired this up correctly, every time an event in this calendar starts you will get a message with all the details of the event, like so:

With this you can now go crazy on the actions: use the event name to identify the switch to switch, or the description to add extra information to your flow and the actions to be taken. This is fully flexible. And of course you can control it from your phone if you want to.
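
As a rough sketch of what such an action could look like in a function node wired between the "google calendar in" node and an "mqtt out" node (the topic layout is made up for illustration, and I am assuming the event title arrives in msg.payload.summary – check what the node actually emits in your set-up):

// sketch: use the calendar event title as the name of the switch to turn on
// 'house/switch/<name>/cmd' is a made-up MQTT topic layout for illustration
var eventName = msg.payload.summary.toLowerCase(); // assuming the title is in msg.payload.summary

msg.topic = 'house/switch/' + eventName + '/cmd';
msg.payload = 'on';

return msg; // wire this into an mqtt out node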

Part 2: Information

You may also want to have events logged in a calendar rather than in a plain logfile. This comes in very handy, as you can easily see, for example, when people arrived at or left home, or when certain long-running jobs started/ended.

To achieve this you can use the calendar out nodes for Node-Red and prepare a message using a function node like this:

var event = {
    'summary': msg.payload,
    'location': msg.location,
    'description': msg.payload,
    'start': {
        'dateTime': msg.starttime, //'2015-05-28T09:00:00-07:00',
        'timeZone': 'Europe/Berlin'
    },
    'end': {
        'dateTime': msg.endtime, //'2015-05-28T17:00:00-07:00',
        'timeZone': 'Europe/Berlin'
    },
    'recurrence': [
        //'RRULE:FREQ=DAILY;COUNT=2'
    ],
    'attendees': [
        //{'email': 'lpage@example.com'},
        //{'email': 'sbrin@example.com'}
    ],
    'reminders': {
        'useDefault': true,
        'overrides': [
            //{'method': 'email', 'minutes': 24 * 60},
            //{'method': 'popup', 'minutes': 10}
        ]
    }
};
msg.payload = event;
return msg;

And as said – we are using it for all sorts of things: when the cat uses her litter box, when the washing machine, dryer or dishwasher starts and finishes, or simply to count how many Nespresso coffees we've made. Things like when members of the household arrive at or leave places like work or home, when movement is detected, or when anything out of order or notable needs to be written down.

And of course it's as convenient as it can be – here's the view of a recent Saturday:

pushing notifications in home automation

I was asked recently how I enabled my home automation to send push notifications to members of the household.

The service that serves all of our notification needs is Pushover.

Pushover gives you a simple API and device management, and allows you to trigger notifications with icons and text sent to either all or specific devices. It lets you specify a message priority, so that the most important push notifications are pushed to the front even when your phone is set to do-not-disturb.

The device management and API, as said, are pretty simple and straightforward.
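
To illustrate just how simple: one notification is one HTTP POST. A minimal sketch (APP_TOKEN and USER_KEY are placeholders for your own credentials; title, priority and device are optional parameters):

// minimal sketch of the Pushover REST API - one POST per notification
// APP_TOKEN and USER_KEY are placeholders for your own credentials
const params = new URLSearchParams({
    token: 'APP_TOKEN',
    user: 'USER_KEY',
    title: 'washing machine',
    message: 'The washing machine is done.',
    priority: '1' // high priority: gets pushed past quiet hours
});

fetch('https://api.pushover.net/1/messages.json', { method: 'POST', body: params })
    .then(r => r.json())
    .then(console.log);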

apparently we’re sending a lot of notifications to these devices…

As for the actual integration, I am using the Node-RED integration of Pushover. You can find it here: node-red-contrib-pushover.

With the newest iOS client it even got Apple Watch integration. So you are not limited to text and images: you can also send out a state that updates automatically on your watch face.

As Pushover seems consistent in service and keeps bringing updates, I don't miss anything – though I have not tested it extensively on Android.

a new header

I had redone the header of this blog a while ago, but since I was trying out some things on the template, I wanted something more dynamic – without any additional dependencies.

So I searched and found:

Tim Holman did a very nice implementation of this "worm generator" using only the HTML5 canvas tag and some math. I made some very slight changes and integrated it into the header graphic. It reacts to your mouse movement and resets if you click anywhere. Give it a go!

bringing the thinclient back

I had to solve a problem. I wanted to have the exact same session and screen shared across different workplaces/locations simultaneously – from looking at the same screen from a different floor, to having the option to just walk over to the lab desk, solder some circuits together, and find the very same documents already open on the screens over there.

One option was to use a tablet or notebook and carry it around. But this would not solve the need to have the screen content displayed on several screens simultaneously.

Also I did not want to rely on the computing power of a notebook / tablet alone. Of course those would get more powerful over time. But each step would mean I would have to purchase a new one.

Then, in a move of desperation, I remembered the "old days" when ThinClients used to be the new kid in town. And then I tried something:

I just recently had moved all house server infrastructure over to Linux and Docker. So what would keep me from utilizing the computing power of that one beefy server in the basement to host all of my desktop needs?

It turns out: Nothing really. Docker is well prepared to host desktop environments. With a bit of tweaking and TigerVNC Xvnc I was able to pre-configure the most current Ubuntu to start my preferred Mate desktop environment in a container and expose it through VNC.

If you wanted to replicate this I would recommend this repository as a starting point.

Even better, I found that the RaspberryPi single board computers come with a free, pre-licensed and accelerated version of RealVNC.

So I took one of those RaspberryPis, booted up Raspbian Desktop Lite and connected to the Docker container's VNC port. It all worked just like that.

this is the RaspberryPi client with the windowed docker container VNC session

The screenshot above holds an additional piece of information for you: I wanted sound! Video works smoothly up to a certain size of the moving picture – after all, those RaspberryPis only come with sub-Gbit/s wired networking. But to get sound working I had to take some additional steps.

First, on the RaspberryPi that is supposed to output the sound to the speakers, you need to install and set up pulseaudio + paprefs and configure it to accept audio over the network. Then you can configure the client to send its audio there.

In the docker container a simple command would then redirect all audio to the network:

pax11publish -e -S thinclient

Just replace "thinclient" with the IP or hostname of your RaspberryPi. After a restart, Chrome started to play audio across the network through the speakers of the ThinClient.

Now all my screens have those RaspberryPis attached to them, and with Docker I can run as many desktop environments in parallel as I wish. And because VNC does not care how many connections are made to one session, I can have all workplaces across the house connected to the same session, seeing the same content at the same time.

And yes: the UI and overall feel is silky smooth. And since VNC adapts to the available bandwidth to some extent by changing the quality of the image, the VNC sessions are very much usable even across the internet. Given that there's only one port for video and one port for audio, it's even possible to tunnel those sessions to anywhere you might need them.

testing video

One thing I could not do so far, without linking to external sources or giving up control over the content storage, is to have videos here on these pages.

There are a couple of options to achieve this and I am evaluating some of them right now. The goal is very clear:

  • no external links
  • no external resources embedded or included
  • 720p/1080p/2160p quality levels, ideally with bandwidth scaling

So let's try out some options:

720p
1080p
2160p

Apple Health challenges are broken

We are using Apple's smartwatch to measure some health stats during our workouts. And the Apple Watch is doing a great job at that.

With all that polish one would expect better from what Apple has to offer in the software department…

The Apple Watch has monthly challenges that are automatically generated from previous measurements. But seeing that an already much-above-average activity number would have to be doubled to complete the challenge is absurd – to a degree where challenges arguably become health risks…

can your kitchen scale do this trick? – ESP8266+Load Cell+MQTT

Ever since we changed our daily diet, we have been weighing everything we eat or cook. Like everything.

Quickly we found that the kitchen scales you can buy cheaply either do not offer the convenience we are looking for, or regularly run out of power and need battery replacements.

As we already have all sorts of home automation in place anyway, the idea was born to integrate an ESP8266 into two of those cheap scales and – while ripping out most of their electronics – base the new scale's functionality on the load cells already present in the cheap scale.

So one afternoon in January 2018 I sat down and put all the parts together:

ESP8266 + HX711 + 4 Load Cells
my notes of the wiring… this might be different for your load cells…

After the hardware portion I sat down and programmed the firmware of the ESP8266. The simple idea: It should connect to wifi and to the house MQTT broker.

It would then send its measurements into a /raw topic as well as receive commands (tare, calibration) over a /cmd topic.

The next step was to get the display of the measured weights sorted. The idea for this: write a web application that connects to the MQTT broker's websocket and receives the stream of measurements. It would then add some additional logic, like a "tare" button in the web interface as well as a list of recent measurements that can be stored for later use.
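
A minimal sketch of that websocket subscription using the mqtt.js browser bundle (the broker URL and the scale/raw and scale/cmd topic names are placeholders – use whatever your broker and firmware are configured for):

// minimal sketch: browser web app subscribing to the scale via MQTT over websockets
// broker URL and topic names are placeholders - adjust to your set-up
const client = mqtt.connect('ws://broker.local:9001');

client.on('connect', () => client.subscribe('scale/raw'));

client.on('message', (topic, message) => {
    const grams = parseFloat(message.toString());
    document.querySelector('#weight').textContent = grams.toFixed(1) + ' g';
});

// the web interface tare button simply publishes to the command topic
document.querySelector('#tare').onclick = () => client.publish('scale/cmd', 'tare');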

the web app. I am not a web designer – help me if you can! ;-)

An additional automation: if the tare button is pressed and the weight is bigger than 10g, the weight is automatically added to the measurements list in the web app – no matter which of the tare buttons was used, the one in the web app or the physical button on the actual scale. Very practical!

Here’s a short demo of the logic, the scale and the web app in a video:

You can grab the source code for the Arduino ESP8266 firmware as well as the source code for the web application here.

fonts for your programming needs

We are looking at our screens for more and more of the day, and most of that time we are reading or writing text. Text needs to look pretty for our eyes not to get sore – apart from the obvious "being able to tell what letter that is", there is a big portion of personal taste and preference when it comes to the choice of font.

Most of the texts I am writing benefit from monospaced fonts.

This blog celebrates monospaced fonts for programming.
So many fonts have popped up in recent years.

programmingfonts.org/about

Of course there’s a nice page available that previews the fonts right in your browser:

LED projector for your home automation needs

In 2017 Texas Instruments released a line of cheap, industry-grade LED projectors meant to be used in production lines and the like:

DLP® LightCrafter Display 2000 is an easy-to-use, plug-and-play evaluation platform for a wide array of ultra-mobile and ultra-portable display applications in consumer, wearables, industrial, medical, and Internet of Things (IoT) markets. The evaluation module (EVM) features the DLP2000 chipset comprised of the DLP2000 .2 nHD DMD, DLPC2607 display controller and DLPA1000 PMIC/LED driver. This EVM comes equipped with a production ready optical engine and processor interface supporting 8/16/24-bit RGB parallel video interface in a small-form factor.

Texas Instruments

And of course this got picked up by makers – people like MickMake, who designed an adapter PCB connecting the RaspberryPi Zero W to the smallest projector available from TI.

After I had learned about the existence of those small projectors I had to get a couple and try for myself. There would be so many immediate and potential applications in our house.

2x DLPLDCR2000EVM with MickMake adapter and RaspberryPi Zero W

After having them delivered I did the first trial with just a breadboard and the Raspberry Pi 3.

first light!

The projector module has a native resolution of 640×360 – so not exactly high-pixel-density. And of course if the image is projected bigger the screen-door effect is quite noticeable. Also it’s not the brightest of images depending on the size. For the usual use-cases the brightness is definitely sufficient.

Downsides

  • too low brightness for large projection size – no daylight projection
  • low resolution is an issue for text and web content – it is not so much of an issue for moving pictures as you might think. Video playback is well usable.
  • flimsy optics and manual focus – it works, but there is no autofocus or the like.

Upsides

  • very low powered – 2.5A/5V USB power supply is sufficient for Pi Zero + Projector on full brightness (30 lumen)
  • low brightness is not always bad – one of our specific use cases requires an image as dim as possible with fine-grained control of the brightness, which this projector offers.
  • extremely small footprint / size allows to integrate this device into places you would not have thought of.
  • almost fully silent operation – the only moving part that makes a sound is the color wheel inside the DLP module. You have to put your ear right onto it to hear anything.
  • passive cooling sufficient – even at full brightness an added heat sink is enough to dissipate the heat generated by the LED.

So what are these use cases that require such a projector, you ask?

Night status display:

For the last 20+ years I have been used to sleeping with a "night playlist" running. So far an LED TV was used at the lowest brightness possible; still it was pretty bright. The projector module allows dimming the brightness down to almost "moon brightness" and also allows shifting the color balance towards the reds. This means: the perfect night projection is possible! And the power consumption is extremely low. A well-watchable, lowest-brightness, red-shifted image also means much lower temperatures on the projector module – it's crazy how low-powered and low-temperature it runs.

at 75% brightness (camera did not properly focus…)

Season Window Projection:

Because the projector is small, low-powered and bright enough for back-lit projection, we tried and succeeded with a Halloween window projection scene last season.

outside view
inside view

It really looks funky from the outside – funky enough to have several people stop in front of the house and point fingers. All that while power consumption was really low.

House overall status projections:

When projecting information is that cheap and power-efficient, it really shines when used to display overall status information like house-alarm status, general switch maps, locations of family members and so on. I'll leave those to your imagination, as these kinds of status displays give away a lot of personal information that isn't well suited for the internet.

Head Up Display esthetics

Many cars these days come with head-up displays. These kinds of displays are used to make information like the current speed appear to "float" over the street ahead, right in your field of vision.

This has the clear advantage that the driver can stay focused on the street rather than looking away from the street and to the speedometer.

As practical as it seems, these displays are not easy to build and seemingly not easy to design. Every time I came across one, its built-in functionality was limited in a way that I can only assume not a lot of thought had gone into what exactly the driver would like to see and how that should be displayed. There was always so much left to be desired.

Apparently the technology behind these HUDs is at a point where it’s quite affordable to start playing with some ideas to retrofit a car with a more personal and likeable version.

So I started to take a look at what is available – smartphones have bright displays, and I had never tried to see what happens when you utilize one to project information onto the windshield. So I tried.

As you can see – bright enough, readable but hazy and not perfectly sharp. The reason is quite simple:

“In the special windshield normally used, the transparent plastic safety material sandwiched in between the two pieces of glass must have a slight and very precise wedge, so that the vehicle operator does not see a HUD double image.”

laserfocusworld

There are some retrofit adhesive film solutions available that claim to help with that. I have not tried any yet. To be honest: to my eye the difference is noticeable but not a deal-breaker.

So I've tried the apps available. They work, but they do a lot of things differently from how I would have expected or done them. They are bearable, but I think it could be done better.

tldr: I started prototyping away and made a list of things that need to be done about the existing HUD applications.

mirrored basic html prototype, not well adjusted, just to play…

Here’s my list of what I want to achieve:

  • display orientation according to driving direction – I had expected all HUD applications to do this. They know the driving direction. They know how the device is oriented in space. They can tell which direction the windshield is. They know how to correctly turn the screen. They do not do that. None of them.
  • fonts and numbers – I cannot stand the numbers jumping around when they change up and down
  • speed steps interpolation – GPS only delivers a speed update every second or so. In that time the speed might jump up or down by more than ±1. The display has 60 fps and gyros to play with and interpolate… I want smooth number transitions (see the sketch after this list).
  • have an "eco-meter" – using the gyros, the HUD would be able to display harsh acceleration and braking. Maybe display a color-coded bar where whatever is measured makes the bar go left or right…
  • speed-limit display – apparently this is a huge issue looking at the data availability. There seems to be OpenStreetMap data and options to contribute. Maybe that can be added.
  • have a non-HUD mode – non-mirrored, to use for example for setting speed limits and contributing them to OpenStreetMap!
  • automatically switch between HUD and non-HUD mode – because the device knows its orientation in space: if you pick it up from the dashboard and look at it, why not switch automatically?
  • speed zone color coding – change the color of the speed display depending on configurable speed regions: 0-80 is green, 80-130 is yellow, 130-250 is red.
  • turn the display off when the car is stopped – if nothing is displayed or needs to be displayed, for example because the car has stopped, the display can turn off completely on its own.
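
Here's a rough sketch of the kind of speed interpolation I have in mind (an illustration of the idea, not the prototype's actual code; the 0.15 smoothing factor and the drawSpeed routine are made up):

// smooth the ~1 Hz GPS speed towards the displayed value at 60 fps
// exponential smoothing; the 0.15 factor is an arbitrary illustration value
let targetSpeed = 0; // last speed reported by GPS, km/h
let shownSpeed = 0;  // what the HUD actually renders

function onGpsUpdate(speedKmh) {
    targetSpeed = speedKmh;
}

function renderFrame() {
    // move a fraction of the remaining distance each frame
    shownSpeed += (targetSpeed - shownSpeed) * 0.15;
    drawSpeed(Math.round(shownSpeed)); // drawSpeed: the HUD's own (hypothetical) draw routine
    requestAnimationFrame(renderFrame);
}

requestAnimationFrame(renderFrame);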

Navigation is of limited value, as the only way I could think of adding value would be a serious AR solution that uses the whole windshield. Now that I've got these small low-power projectors around… that gets me thinking…

What would you want to have in such a HUD in your car?

out with the old, in with the new – house gets ssd upgrade

A week ago I wrote about another mechanical hard drive that was about to bite the dust in our house's elaborate set-up.

Not having time for a full day of focus, I postponed the upgrade to this Saturday – with the agreement of the family, as they suffer through the maintenance period as well.

The upgrade would need careful preparation in order to be doable in one sitting. And this was also meant to be some sort of disaster-recovery drill: I would restore the house's central Docker and service infrastructure from scratch along the way.

And this would need to happen:

  • all services, ZFS pools, docker containers and configurations needed to be double-checked for full backup – as this would be used to restore everything (ZFS snapshots are just the bomb for these things!)
  • the main central docker server would have to go down
  • get all hard disks ripped out
  • SSDs put in and properly configured
  • get a fresh Ubuntu 18.04 LTS set-up and booting from ZFS on a NVMe SSD (bios update(s)!, secure boot disabling, ahci enabling, m.2 instead of sata express switching…you get the idea)
  • get the network set-up in order: upgrading from Ubuntu 16.04 to 18.04 means ifupdown networking was replaced by netplan. Hurray! Not.
  • get docker-ce and docker-compose ready and set up, and all that funky networking aligned – and figure out along the way that there are currently major issues with IPv6 in docker.
  • pull in the small number of still needed mechanical hard disks and import the ZFS pools
  • start the docker builds from the backup (one script \o/)
  • start the docker containers in their required order (one script \o/)

Apart from some hardware/BIOS-related issues and the rather unexpected netplan introduction, everything went fairly well. It just takes ages to watch data being copied.

the "heartbeat" is a general term in our house for how busy everything is. It's an artificial value calculated from sensor inputs per second, actions taken and so on – a good indication of whether there are issues. During the maintenance period (orange/red) it wasn't updated and was stuck at the previous value.

Bandwidth was the only real issue with this disaster recovery. All building blocks seemed to fall into place and no unplanned measures had to be taken. The house systems went partially down at around 12:30 and were back up 10 hours later at 22:00. Of course non-automated things like internet kept working, and all switches fell back to being manual push-buttons. So everything could still be done, just with a lot less convenience.

All in all there are more than 40 vital docker container based services that get started one after the other and interconnect to deliver a full house home automation. With the added SSD performance this whole ship is much much more responsive to activities. And hopefully less prone to mechanical defects.

The backup and disaster preparations proved practical and worked well. Not a beat was missed (except sensor measurements during the 10 hours of downtime) and no data was lost.

a Core i3 at 3.7 GHz with 32 GByte RAM is sufficient, and tuned for power consumption

What could be done better: it could be much more straightforward if there were fewer dependencies on external repositories / Docker Hub. Almost all issues that came up with containers stemmed from the fact that the maintainers had, just a day before, introduced something that kept them from spinning up naturally. Bad luck. But that can be helped! There's now a multi-page disaster-recovery-procedure document that will be used and updated in the future.

Oh, and what speeds am I seeing? The promised 3 GByte/s read and write speeds are real. It's quite impressive to frequently see 4-digit MByte/s values in iotop.

I almost forgot! During this exercise I was in the server room for less than 30 minutes. The rest of the time I was at a warm and nice work-desk set-up that I use in the house as much as I can – I will tell you about it in another article. Its major feature is that it (a) is a standing desk and (b) has a treadmill under it. Yes. Treadmill.

You will get pictures of the set-up in that article, but since I spent more than 10 hours walking on Saturday while doing the disaster recovery, I want to give you a glimpse of what such a set-up means:

46 km while doing disaster recovery successfully.

indoor location tracking with ESP32

This project uses the same approach that I took for my ESP32 based indoor location tracking system (by tracking BLE signal strength). But this project came up with an actual user interface – NICE!

“Indoor positioning of a moving iBeacon, using trilateration and three ESP32 development modules. ESP32 modules report all beacons they see, to MQTT topic. Dashboard subscribes to this topic, and shows the location of beacons which are seen by all three stations.”
(https://github.com/jarkko-hautakorpi/iBeacon-indoor-positioning-demo)

Apple Airplay for SONOS (in Docker)

We’ve got a couple of SONOS based multi-room-audio zones in our house and with the newest generation of SONOS speakers you can get Apple Airplay. Fancy!

But the older hardware does not support Apple Airplay due to its limited capabilities. This is too bad.

So once again Docker and OpenSource + Reverse-Engineering come to the rescue.

AirConnect is a small but fancy tool that bridges SONOS and Chromecast to Airplay effortlessly. Just start and be done.

It works a treat and all of a sudden all those SONOS zones become Airplay devices.

There is also a nice dockerized version that I am using.

waking up to another dying hard disk – upgrade time!

At our house I am running a medium-sized operation when it comes to all the storage and in-house / home-automation needs of the family.

This is done by utilizing several products from QNAP, Synology and a custom built server infrastructure that does most of the heavy-lifting using Docker.

This morning I woke up to an email stating that one of the mirrored drives in the machine is reporting read errors.

Since this drive is part of a larger array of spinning-rust-style hard disks, just replacing it would work. But given the lifetime of those drives, I am not particularly interested in more replacements in the very near future, so a more general approach seems right.

63083 lifetime hours = 2628 days = 7.2 years powered up

You can see what I mean. This drive is old. Very old. And so are its mates. Actually this is the newest drive of another 6 or so 1.5TB and 1TB drives in this array.

This redundant array is in fact still quite small and not fully used, as most storage-intensive, non-service-related disk space demands have moved to iSCSI and other means. So many disks, with so much redundancy and so little disk space, are simply not needed anymore. Actual current space utilization is about 20% of the available 2TB volume.

Time for an upgrade! Taking a look in the manual of the mainboard I had replaced 2 years ago, I found that it has dual NVMe M.2 ports – from which I can boot, according to that same manual.

So I thought: Let’s start with replacing the boot drives and the /var/lib docker portions with something fast.

To my surprise, Samsung is building 1 TB NVMe M.2 SSDs at a price I had expected to be much higher.

Nice! So let me report back when this has shipped and I can start the re-set-up of the operating system and Docker environment – which, in all fairness, should be straightforward. I will upgrade from Ubuntu 16.04 LTS to 18.04 LTS in the same step, and the only more complex things I expect are the boot-from-ZFS (on Linux) and the iSCSI set-up of the machine.

If you got any tips or best-practice, let me know.

I have just started catching up on what happened to ZFS on Linux over the last 2 years. My initial decision 2 years ago to use Linux as the main driver OS, and Ubuntu as the distribution, was based upon the expectation of not having this as my hobby for the next years. And that expectation was fulfilled by Ubuntu 16.04 LTS.

small and cheap multi-sensor nodes for home automation

I had previously reported on my efforts to develop an indoor location tracking system. Back in 2017, when I started to work on this, I only planned to utilize inexpensive Espressif ESP32 SoCs to look for bluetooth beacons.

In the meantime I figured that I could, and should, also utilize the multiple digital and analog input/output pins this specific SoC offers. And what better to utilize them with than a range of sensors that could now also feed their measurements into an MQTT feed, along with the bluetooth details.

And there is a whole lot of sensors that I’ve added. On a breadboard it looks like this:

So what do we have here:

  • Motion sensor
  • Temperature sensor
  • Humidity sensor
  • Light sensor
  • Barometric pressure sensor
  • and of course an RGB LED to show a status

The software is already done, and after 3 weeks of extensive testing it seems stable. I will release it eventually, later in the process.

I've also found plastic cases that will fit this amount of sensory equipment, to replace the sensor cases I had already bought for the ESP32 alone. For now I'll close this article with some pictures.

The MQTT feed one of these nodes produces…

…and the Grafana dashboard I am using for this specific prototype device.

progressive web applications

These days even heise online is writing about the wonders of PWAs (progressive web applications).

A PWA, simply put, is a standardized way to add some context to a website and package it up so that it behaves much like a native mobile application – the kind you are used to installing onto your phone or tablet, most likely via an app store of some sort.

The aim of PWA is to provide a framework and tooling so that the website is able to provide features like push notifications, background updates, offline modes and so on.
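
The entry point is pleasantly small: a web app manifest plus a service worker. A minimal sketch (sw.js and manifest.json are conventional file names, not requirements):

// minimal sketch: register a service worker, the basis for offline mode and push
// 'sw.js' is a conventional file name, not a requirement
if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/sw.js')
        .then(reg => console.log('service worker registered, scope:', reg.scope))
        .catch(err => console.error('registration failed:', err));
}

A <link rel="manifest" href="/manifest.json"> in the page's head then tells the browser the app's name, icons and display mode so it can be added to the home screen.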

Very neat. Just today I enabled the PWA mode of this website, so you're now free to add it to your home screen. But fear not: you won't be pestered with push notifications or any background activity. It's merely a more convenient, optional shortcut.

Converting ひらがな to “hiragana” and カタカナ to “katakana” – Romaji command line tool

I had this strange problem that my car was not able to display Japanese characters when confronted with them. Oh, the marvels of inserting a USB stick into a car from 2009.

stupid BMW media player without proper font

Now there's no real option I know of to get it to display the characters right without risking bricking the car's entertainment system.

Needless to say, my wife's car does the trick easily – of course, it's an Asian car!

Anyway. I wrote a command line tool using some awesome pre-made libraries to convert Hiragana/Katakana characters to their romaji counterparts.

You can find it on github: https://github.com/bietiekay/romaji 

“making your home smarter” – use case #12 – How much time do I have until…?

Did you notice that most calendars and timers are missing an important feature? Some information that I personally find most interesting to have readily available.

It's the information about how much time is left until the next appointment comes up. Even smartwatches, which should be jacks-of-all-trades in regard to time and schedule, do not display the "time until the next event".

I came across this shortcoming when I started to look for this information. No digital assistant can tell me right away how much time is left until a certain event.

But the connected house is based upon open technologies, so one can easily add these kinds of features oneself. My major use cases for this are (a) focused work, (b) planning quick work-out breaks and (c) of course making sure there's enough time left to actually get enough sleep.

As you can see in the picture attached my watch will always show me the hours (or minutes) left until the next event. I use separate calendars for separate displays – so there’s actually one for when I plan to get up and do work-outs.

Having the hours left until something is supposed to happen visible at a glance – and of course being able to ask verbally, through chat or voice, in any room of the house how long it is until the next appointment – gives peace of mind :-).
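
The logic behind this display is tiny. A sketch of the computation (the events array stands in for whatever your calendar integration delivers):

// sketch: hours (or minutes) until the next upcoming event
// 'events' stands in for the list the calendar integration delivers
function timeUntilNext(events, now = new Date()) {
    const upcoming = events
        .map(e => new Date(e.start))
        .filter(start => start > now)
        .sort((a, b) => a - b)[0];
    if (!upcoming) return 'no upcoming events';

    const minutes = Math.round((upcoming - now) / 60000);
    return minutes < 60 ? minutes + ' min' : (minutes / 60).toFixed(1) + ' h';
}

// example: a watch face complication showing '7.5 h' until the wake-up event
console.log(timeUntilNext([{ start: '2019-03-30T06:30:00+01:00' }]));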


“making your home smarter”, use case #8 – it’s all about the power consumption

Weekend is laundry time! The smart house knows and sends out notifications when the washing machine or the laundry dryer are done with their job and can be cleared.

Of course this can all be extended with more sensory data, like power consumption measurements at the actual sockets, to single out specific devices much more accurately. But for simple notification alerting it's apparently sufficient to monitor just the house's central power distribution rack.
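
The detection logic itself can be as simple as a threshold with a cool-down period. A rough sketch (the 40 W threshold and the 3-minute quiet window are made-up illustration values – every machine needs its own tuning):

// sketch: detect 'machine finished' from a stream of power readings
// threshold and timing are made-up illustration values - tune per device
let running = false;
let belowSince = null;

function onPowerReading(watts, now = Date.now()) {
    if (watts > 40) {           // draw above threshold: the machine is running
        running = true;
        belowSince = null;
    } else if (running) {       // draw dropped: maybe finished, maybe just a pause
        belowSince = belowSince || now;
        if (now - belowSince > 3 * 60 * 1000) { // quiet for 3 minutes: call it done
            running = false;
            belowSince = null;
            return 'laundry done - send notification';
        }
    }
    return null;
}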

On the side, this kind of monitoring and pattern matching is also useful for identifying devices going bad. Think of monitoring the power consumption of a fridge: when its compressor goes bad, it consumes an increasing amount of power over time. You would spot the malfunction before the device fails.

Same for all sorts of pumps (water, oil, aquarium,…).

All this monitoring and pattern matching the smart house does, so its inhabitants don't have to.