a new header

I had redone the header of this blog a while ago, but since I was trying out some things on the template anyway, I wanted something more dynamic – without any additional dependencies.

So I searched and found:

Tim Holman did a very nice implementation of this “worm generator” using only the HTML5 canvas tag and some math. I made some very slight changes and integrated it into the header graphic. It will react to your mouse movement and reset if you click anywhere. Give it a go!

bringing the thinclient back

I had a problem to solve: I wanted to have the exact same session and screen shared across different workplaces and locations simultaneously – from looking at the same screen from a different floor, to having the option to just walk over to the lab desk, solder some circuits together and find the very same documents already opened on the screens over there.

One option was to use a tablet or notebook and carry it around. But this would not solve the need to have the screen content displayed on several screens simultaneously.

Also, I did not want to rely on the computing power of a notebook or tablet alone. Those will of course get more powerful over time, but each step up would mean purchasing a new device.

Then, in a move of desperation, I remembered the “old days” when thin clients were the new kid in town. And then I tried something:

I had just recently moved all of the house server infrastructure over to Linux and Docker. So what would keep me from utilizing the computing power of that one beefy server in the basement to host all of my desktop needs?

It turns out: nothing, really. Docker is well prepared to host desktop environments. With a bit of tweaking and TigerVNC's Xvnc I was able to pre-configure a current Ubuntu to start my preferred MATE desktop environment in a container and expose it through VNC.

If you want to replicate this, I would recommend this repository as a starting point.
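
For reference, a minimal sketch of how such a container could be run – the image name, password and port are placeholders for whatever the repository you start from produces:

# run the containerized desktop and expose it as VNC display :1 (TCP port 5901)
docker run -d --name desktop \
  -p 5901:5901 \
  -e VNC_PASSWORD=changeme \
  --shm-size=1g \
  my/ubuntu-mate-xvnc:latest
# any VNC client can then connect to <server-ip>:5901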

Even better, I found that the Raspberry Pi single-board computers come with a free, pre-licensed and accelerated version of RealVNC.

So I took one of those Raspberry Pis, booted up the lite Raspbian desktop and connected to the Docker container's VNC port. It all worked just like that.

this is the RaspberryPi client with the windowed docker container VNC session

The screenshot above holds an additional piece of information for you: I wanted sound! Video plays smoothly up to a certain size of the moving picture – after all, those Raspberry Pis only come with sub-Gbit/s wired networking. But to get sound working I had to take some additional steps.

First, on the Raspberry Pi that is supposed to output the sound to the speakers, you need to install and set up PulseAudio plus paprefs. Once it is configured to accept audio over the network, the client can be configured to send its audio there.
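
On the Raspberry Pi side this boils down to something like the following – a sketch; paprefs toggles the same setting through its GUI:

# on the Raspberry Pi that has the speakers attached
sudo apt-get install pulseaudio paprefs
# either enable network access to local sound devices in paprefs,
# or load the TCP module by hand (anonymous access, trusted home network assumed):
pactl load-module module-native-protocol-tcp auth-anonymous=1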

In the Docker container, a simple command then redirects all audio to the network:

pax11publish -e -S thinclient

Just replace “thinclient” with the IP or hostname of your Raspberry Pi. After a restart, Chrome started to play audio across the network through the speakers of the thin client.

Now all my screens have one of those Raspberry Pis attached to them, and with Docker I can even run as many desktop environments in parallel as I wish. And because VNC does not care how many connections are made to one session, I can have all workplaces across the house connected to the same session, seeing the same content at the same time.

And yes: the UI and overall feel is silky smooth. Since VNC adapts to the available bandwidth to some extent by changing the image quality, the sessions are very much usable even across the internet. Given that there is only one port for video and one port for audio, it is even possible to tunnel those sessions to anywhere you might need them.
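
For the remote case a plain SSH tunnel is enough – a sketch, assuming the default ports (5901 for VNC display :1, 4713 for PulseAudio over TCP) and a reachable SSH login on the home server:

# forward the VNC and PulseAudio ports of the home server to the local machine
ssh -N -L 5901:localhost:5901 -L 4713:localhost:4713 user@home-server
# then point the local VNC client at localhost:5901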

testing video

One thing I cannot do yet – without linking to external sources or giving up control over the content storage – is to have videos here on these pages.

There are a couple of options to achieve this and I am evaluating some of them right now. The goal is very clear:

  • no external links
  • no external resources embedded or included
  • 720p/1080p/2160p quality levels, ideally with bandwidth scaling

So let's try out some options:

720p
1080p
2160p
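
For reference, renditions like these can be produced locally with ffmpeg before uploading – a sketch, file names assumed:

# encode three renditions of the same clip for self-hosting
ffmpeg -i source.mp4 -vf scale=-2:720  -c:v libx264 -crf 23 -c:a aac video-720p.mp4
ffmpeg -i source.mp4 -vf scale=-2:1080 -c:v libx264 -crf 23 -c:a aac video-1080p.mp4
ffmpeg -i source.mp4 -vf scale=-2:2160 -c:v libx264 -crf 23 -c:a aac video-2160p.mp4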

Apple Health challenges are broken

We are using Apple's smartwatch to measure some health stats during our workouts, and the Apple Watch is doing a great job at that.

With all that polish one would expect better from what Apple has to offer in the software department…

The Apple Watch has monthly challenges that are automatically generated from previous measurements. But seeing that an already well-above-average activity number would have to be doubled to complete the challenge is absurd – to a degree where the challenges are arguably a health risk…

can your kitchen scale do this trick? – ESP8266+Load Cell+MQTT

Ever since we changed our daily diet we have been weighing everything we eat or cook. Like, everything.

We quickly found that the kitchen scales you can buy cheaply either do not offer the convenience we are looking for, or they regularly run out of power and need battery replacements.

As we already have all sorts of home automation in place anyway, the idea was born to integrate an ESP8266 into two of those cheap scales and – while ripping out most of their electronics – base the new scale functionality on the load cells already present in them.

So one afternoon in January 2018 I sat down and put all the parts together:

ESP8266 + HX711 + 4 Load Cells
my notes of the wiring… this might be different for your load cells…

After the hardware portion I sat down and programmed the firmware of the ESP8266. The simple idea: it should connect to WiFi and to the house MQTT broker.

It then sends its measurements into a /raw topic and receives commands (tare, calibration) over a /cmd topic.
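
From the broker's point of view this is easy to peek into with the stock Mosquitto command-line clients – a sketch; the broker hostname, topic prefix and payload are assumptions, only the /raw and /cmd split is as described above:

# watch the raw weight stream coming from the scale
mosquitto_sub -h mqtt-broker -t 'kitchen/scale1/raw' -v

# send a tare command to the scale
mosquitto_pub -h mqtt-broker -t 'kitchen/scale1/cmd' -m 'tare'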

The next step was to get the display of the measured weights sorted. The idea: write a web application that connects to the MQTT broker's websocket and receives the stream of measurements. On top of that it adds some additional logic, like a “tare” button in the web interface as well as a list of recent measurements that can be stored for later use.

the web app. I am not a web designer – help me if you can! ;-)

An additional automation: if a tare button is pressed and the weight is greater than 10 g, the weight is automatically added to the measurements list in the web app – no matter which of the tare buttons was used, the one in the web app or the physical button on the actual scale. Very practical!

Here’s a short demo of the logic, the scale and the web app in a video:

You can grab the source code for the Arduino ESP8266 firmware as well as the source code for the web application here.

fonts for your programming needs

We are looking at our screens for more and more of the day, and most of that time we are reading or writing text. Text needs to look good for our eyes not to get sore – and apart from the obvious “being able to tell what letter that is”, a big portion of personal taste and preference goes into the choice of font.

Most of the texts I am writing benefit from monospaced fonts.

This blog celebrates monospaced fonts for programming.
So many fonts have popped up in recent years.

programmingfonts.org/about

Of course there’s a nice page available that previews the fonts right in your browser:

LED projector for your home automation needs

In 2017, Texas Instruments released a line of cheap industry-grade LED projectors meant to be used in production lines and the like:

DLP® LightCrafter Display 2000 is an easy-to-use, plug-and-play evaluation platform for a wide array of ultra-mobile and ultra-portable display applications in consumer, wearables, industrial, medical, and Internet of Things (IoT) markets. The evaluation module (EVM) features the DLP2000 chipset comprised of the DLP2000 .2 nHD DMD, DLPC2607 display controller and DLPA1000 PMIC/LED driver. This EVM comes equipped with a production ready optical engine and processor interface supporting 8/16/24-bit RGB parallel video interface in a small-form factor.

Texas Instruments

And of course this got picked up by the maker community – by people like MickMake, who designed an adapter PCB that connects a Raspberry Pi Zero W to the smallest projector TI offers.

After I had learned about the existence of those small projectors I had to get a couple and try them for myself. There would be so many immediate and potential applications in our house.

2x DLPDLCR2000EVM with MickMake adapter and Raspberry Pi Zero W

After they were delivered, I did a first trial with just a breadboard and a Raspberry Pi 3.

first light!

The projector module has a native resolution of 640×360 – so not exactly high pixel density. And of course, the bigger the image is projected, the more noticeable the screen-door effect becomes. It is also not the brightest of images, depending on the projection size, but for the usual use cases the brightness is definitely sufficient.

Downsides

  • too low brightness for large projection size – no daylight projection
  • low resolution is an issue for text and web content – it is not as much of an issue for moving pictures as you might think; video playback is well usable.
  • flimsy optics and the focus needs to be set manually – it works, but there is no autofocus or the like.

Upsides

  • very low powered – 2.5A/5V USB power supply is sufficient for Pi Zero + Projector on full brightness (30 lumen)
  • low brightness is not always bad – one of our specific use cases requires an image as dim as possible with fine-grained control of the brightness, which this projector offers.
  • extremely small footprint / size – allows integrating this device into places you would not have thought of.
  • almost fully silent operation – the only moving part that makes a sound is the color wheel inside the DLP module. You have to put your ear right onto it to hear anything.
  • passive cooling sufficient – even at full brightness an added heat sink is enough to dissipate the heat generated by the LED.

So what are these use cases that require such a projector, you ask?

Night status display:

For the last 20+ years I have been used to sleeping with a “night playlist” running. So far an LED TV was used at the lowest brightness possible – still pretty bright. The projector module allows dimming the brightness down to almost “moon brightness” and also allows shifting the color balance towards the reds. This means the perfect night projection is possible! And the power consumption is extremely low. A well-watchable, lowest-brightness, red-shifted image also means much lower temperatures on the projector module – it is crazy how low-powered and low-temperature it runs.

at 75% brightness (camera did not properly focus…)
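
On the software side a dim, red-shifted output can be approximated as well – a rough sketch assuming an X11 session and that the projector shows up as output HDMI-1; the hardware dimming of the projector module itself is what really makes the difference here:

# dim the output heavily and pull the color balance towards red
xrandr --output HDMI-1 --brightness 0.15 --gamma 1.0:0.6:0.4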

Season Window Projection:

Because the projector is small, low-powered and bright enough for back-lit projection, we tried – and succeeded with – a Halloween window projection scene last season.

outside view
inside view

It really looks funky from the outside – funky enough to have several people stop in front of the house and point fingers. All that while power consumption stayed really low.

House overall status projections:

When projecting information is that cheap and power-efficient, it really shines for displaying overall status information like house-alarm status, general switch maps, locations of family members and so on. I will leave those to your imagination, as this kind of status display gives away a lot of personal information that is not well suited for the internet.

Head Up Display esthetics

Many cars these days come with head-up displays. These kinds of displays make information like the current speed appear to “float” over the street ahead, right in your field of vision.

This has the clear advantage that the driver can stay focused on the street rather than looking away from the street and to the speedometer.

As practical as it seems, these displays are not easy to build and seemingly not easy to design. Every time I came across one, its built-in functionality was limited in a way that made me assume not a lot of thought had gone into what exactly the driver would like to see and how that should be displayed. There was always so much left to be desired.

Apparently the technology behind these HUDs is at a point where it’s quite affordable to start playing with some ideas to retrofit a car with a more personal and likeable version.

So I started to take a look at what is available – smartphones have bright displays, and I had never tried to see what happens when you use one to project information onto the windshield. So I tried.

As you can see – bright enough, readable but hazy and not perfectly sharp. The reason is quite simple:

“In the special windshield normally used, the transparent plastic safety material sandwiched in between the two pieces of glass must have a slight and very precise wedge, so that the vehicle operator does not see a HUD double image.”

laserfocusworld

There are some retrofit adhesive film solutions available that claim to help with that. I have not tried any yet. To be honest: to my eye the difference is noticeable but not a deal-breaker.

So I have tried the apps that are available. They work. But they do a lot of things differently from how I would have expected or done them. They are bearable, but I think it could be done better.

tl;dr: I started prototyping away and made a list of things that need to be done better than in the existing HUD applications.

mirrored basic html prototype, not well adjusted, just to play…

Here’s my list of what I want to achieve:

  • display orientation according to driving direction – I had expected all HUD applications to do this. They know the driving direction. They know how the device is oriented in space. They can tell which direction the windshield is. They know how to correctly turn the screen. They do not do that. None of them.
  • fonts and numbers – I cannot stand the numbers jumping around when they change up and down
  • speed steps interpolation – GPS only delivers a speed update every second or so. In this time speed might jump up and down by more than +1. The display has 60 fps and gyros to play with and interpolate… I want smooth number transitions.
  • have an “eco-meter” – using the gyros/accelerometer the HUD would be able to display harsh acceleration and braking. Maybe display a color-coded bar where whatever is measured is reflected in the bar going left or right…
  • speed-limit display – apparently this is a huge issue when you look at data availability. There seems to be OpenStreetMap data and options to contribute; maybe that can be added.
  • have a non-HUD mode – not mirrored, to use, for example, for recording speed limits and contributing them to OpenStreetMap!
  • automatically switch between HUD and non-HUD mode – the device knows its orientation in space, so if you pick it up from the dashboard and look at it, why not switch automatically?
  • speed zones color coding – change the color of the speed display depending on configurable speed regions. 0-80 is green, 80-130 is yellow, 130-250 is red.
  • turn the display off when the car is stopped – if there is nothing that is or needs to be displayed, for example because the car has stopped, the display can turn itself off completely.

Navigation is of limited value here, as the only way I could think of adding value would be a serious AR solution that uses the whole windshield. Then again, I now have these small low-power projectors around… that gets me thinking…

What would you want to have in such a HUD in your car?

out with the old, in with the new – house gets ssd upgrade

A week ago I wrote about another mechanical hard drive that was about to bite the dust in our house's elaborate set-up.

Not having time for a full day of focus, I postponed the upgrade to this Saturday – with the agreement of the family, as they suffer through the maintenance period as well.

The upgrade needed careful preparation in order to be doable in one sitting. It was also meant to be some sort of disaster-recovery drill: I would restore the house's central Docker and service infrastructure from scratch along the way.

And this would need to happen:

  • all services, ZFS pools, Docker containers and configurations needed to be double-checked for full backups – as these would be used to restore everything (ZFS snapshots are just the bomb for these things! – see the sketch after this list)
  • the main central docker server would have to go down
  • get all hard disks ripped out
  • SSDs put in and properly configured
  • get a fresh Ubuntu 18.04 LTS set up and booting from ZFS on an NVMe SSD (BIOS update(s)!, Secure Boot disabling, AHCI enabling, switching to M.2 instead of SATA Express… you get the idea)
  • get the network set-up in order: upgrading from Ubuntu 16.04 to 18.04 means ifupdown networking was replaced by netplan. Hurray! Not.
  • get docker-ce and docker-compose set up and all the funky networking aligned – and figure out along the way that there are currently major issues with IPv6 in Docker.
  • pull in the small number of still needed mechanical hard disks and import the ZFS pools
  • start the docker builds from the backup (one script \o/)
  • start the docker containers in their required order (one script \o/)
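
The ZFS part and the two “one script” steps are less magic than they sound – a rough sketch, pool names and paths assumed:

# before the teardown: recursive snapshot of everything, sent to a backup pool
zfs snapshot -r tank@pre-ssd-upgrade
zfs send -R tank@pre-ssd-upgrade | zfs receive -F backup/tank

# after the rebuild: build and start all services in a fixed order
for service in $(cat /srv/docker/startup-order.txt); do
  (cd /srv/docker/$service && docker-compose build && docker-compose up -d)
done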

Apart from some hardware/BIOS-related issues and the rather unexpected netplan introduction, everything went fairly well. It just takes ages to watch data being copied.

the “heartbeat” is a general term in our house for how busy everything is. It is an artificial value calculated from sensor inputs per second, actions taken and so on – a good indication of whether there are issues. During the maintenance window (orange/red) it was not updated and was stuck at the pre-set value.

Bandwidth was the only real issue with this disaster recovery. All building blocks seemed to fall into place and no unplanned measures had to be taken. The house systems went partially down at around 12:30 and were back up at around 22:00, roughly 10 hours later. Of course, non-automated things like the internet connection kept working, and all switches still worked as plain manual push-buttons. So everything could still be done, just with a lot less convenience.

All in all, there are more than 40 vital Docker-container-based services that get started one after the other and interconnect to deliver the full home automation of the house. With the added SSD performance this whole ship is much, much more responsive. And hopefully less prone to mechanical defects.

The backup and disaster preparations proved to be practical and to work well. No beat was missed (except sensor measurements during the 10 hours of downtime) and no data was lost.

Core i3 at 3.7 GHz and 32 GByte RAM is sufficient and tuned for power consumption

What could be done better: it would be much more straightforward with fewer dependencies on external repositories / Docker Hub. Almost all issues that came up with containers stemmed from the fact that the maintainers had, just a day before, introduced something that kept the containers from spinning up cleanly. Bad luck. But that can be helped! There is now a multi-page disaster-recovery-procedure document that will be used and updated in the future.

Oh, and what speeds am I seeing? The promised 3 GByte/s read and write speeds are real. It is quite impressive to frequently see four-digit megabyte-per-second values in iotop.

I almost forgot! During this exercise I spent less than 30 minutes in the server room. Instead I was at a nice and warm work-desk set-up that I use in the house as much as I can – I will tell you about it in another article. Its major features: it is (a) a standing desk and (b) has a treadmill under it. Yes. A treadmill.

You will get pictures of the set-up in that article, but since I spent more than 10 hours walking on Saturday while doing the disaster recovery, I want to give you a glimpse of what such a set-up means:

46 km while doing disaster recovery successfully.

indoor location tracking with ESP32

This project uses the same approach that I took for my ESP32-based indoor location tracking system (tracking BLE signal strength). But this project came up with an actual user interface – NICE!

“Indoor positioning of a moving iBeacon, using trilateration and three ESP32 development modules. ESP32 modules report all beacons they see, to MQTT topic. Dashboard subscribes to this topic, and shows the location of beacons which are seen by all three stations.”
(https://github.com/jarkko-hautakorpi/iBeacon-indoor-positioning-demo)