css font-feature “tnum”

Oh this is so useful for my head-up-display prototype implementation:

This feature replaces numeral glyphs set on glyph-specific (proportional) widths with corresponding glyphs set on uniform (tabular) widths. Note that some fonts may contain tabular figures by default, in which case enabling this feature may not appear to affect the width of glyphs.

tabular figures: tnum
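
Since the HUD prototype is plain HTML, switching this on is just a matter of setting the corresponding CSS properties on the element that draws the speed. A minimal sketch (the element id is made up):

```ts
// enable tabular (uniform-width) figures on the speed read-out so the digits
// stop shifting horizontally as they change ("speed" is a hypothetical element id)
const speedEl = document.getElementById("speed");
if (speedEl) {
  speedEl.style.setProperty("font-variant-numeric", "tabular-nums"); // high-level CSS property
  speedEl.style.setProperty("font-feature-settings", '"tnum"');      // low-level OpenType feature
}
```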

the discordian calendar on your wrist

I’ve finished my little coding exercise today. After a good Sunday afternoon spent understanding and developing an iOS and Watch application from scratch, I just handed it in for Apple AppStore approval.

The main purpose, aside from the obvious “learning how it’s done”, is that I actually needed a couple of complications on my watch that would show me the current day/date in the discordian calendar.
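
For anyone wondering what that conversion actually involves: the Discordian year has five seasons of 73 days each, the year number is the Gregorian year plus 1166 (YOLD), and February 29th becomes St. Tib’s Day. A rough sketch of the mapping – not the app’s actual Objective-C code, just the algorithm written out in TypeScript for illustration:

```ts
// Gregorian -> Discordian date (sketch). Five 73-day seasons, a 5-day week,
// YOLD = Gregorian year + 1166, and Feb 29 is St. Tib's Day outside the count.
const SEASONS = ["Chaos", "Discord", "Confusion", "Bureaucracy", "The Aftermath"];
const WEEKDAYS = ["Sweetmorn", "Boomtime", "Pungenday", "Prickle-Prickle", "Setting Orange"];

function discordianDate(d: Date): string {
  const year = d.getFullYear();
  const isLeap = (year % 4 === 0 && year % 100 !== 0) || year % 400 === 0;
  const dayOfYearRaw =
    Math.round((Date.UTC(year, d.getMonth(), d.getDate()) - Date.UTC(year, 0, 1)) / 86400000) + 1;

  if (isLeap && d.getMonth() === 1 && d.getDate() === 29) {
    return `St. Tib's Day, ${year + 1166} YOLD`;
  }
  // in leap years, skip St. Tib's Day so the season arithmetic stays aligned
  const dayOfYear = isLeap && dayOfYearRaw > 60 ? dayOfYearRaw - 1 : dayOfYearRaw;

  const season = SEASONS[Math.floor((dayOfYear - 1) / 73)];
  const dayOfSeason = ((dayOfYear - 1) % 73) + 1;
  const weekday = WEEKDAYS[(dayOfYear - 1) % 5];
  return `${weekday}, ${season} ${dayOfSeason}, ${year + 1166} YOLD`;
}
```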

I have to say that the overall process of developing iOS and Watch applications is very streamlined. Much much easier than Android development.

The WatchKit development was probably the less enjoyable part of this project. There simply is not a lot of code, documentation, or example material for WatchKit yet. And most of what exists is in Swift – which I have not adopted yet; I’m sticking with Objective-C for now. With Swift at version 5 and all the upgrades I would have had to do over the last years just to keep up with the language’s development… I guess with my choice to stick to Objective-C I’ve avoided a lot of work.

Anyhow! As soon as the app is through AppStore approval I will write again. Maybe somebody actually wants to use it also? :-)

While writing the app I already came up with the next idea for a complication I would really, really need.

In a nutshell: a complication that I can configure to track a certain calendar and that shows the time in days/hours/minutes until the next appointment in that specific calendar. I will have it set up to show “how many hours till waking up”.
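
The logic behind it would be tiny – find the next event start in the tracked calendar and turn the remaining time into a short label. A sketch (fetching the events from the calendar is the actual work; all names here are made up):

```ts
// given upcoming event start times from the tracked calendar, produce the
// complication text for the next one, e.g. "7h 32m"
function timeUntilNextEvent(eventStarts: Date[], now: Date = new Date()): string | null {
  const next = eventStarts
    .filter((d) => d.getTime() > now.getTime())
    .sort((a, b) => a.getTime() - b.getTime())[0];
  if (!next) return null;                        // nothing upcoming in this calendar

  let minutes = Math.floor((next.getTime() - now.getTime()) / 60000);
  const days = Math.floor(minutes / (24 * 60));
  minutes -= days * 24 * 60;
  const hours = Math.floor(minutes / 60);
  minutes -= hours * 60;
  return days > 0 ? `${days}d ${hours}h` : `${hours}h ${minutes}m`;
}
```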

IoP – the internet of pets – predictive maintenance of a cat

In the interesting field of IoT a lot of buzz is made around the predictive maintenance use cases. What is predictive maintenance?

The main promise of predictive maintenance is to allow convenient scheduling of corrective maintenance, and to prevent unexpected equipment failures.

The key is “the right information in the right time”. By knowing which equipment needs maintenance, maintenance work can be better planned (spare parts, people, etc.) and what would have been “unplanned stops” are transformed to shorter and fewer “planned stops”, thus increasing plant availability. Other potential advantages include increased equipment lifetime, increased plant safety, fewer accidents with negative impact on environment, and optimized spare parts handling.

Wikipedia

So in simpler terms: if you can predict that something will break, you can repair it before it breaks. This improves reliability and saves costs, even though you sometimes repair something that did not yet need repairs. At the very least you reduce inconvenience by repairing/maintaining while it is still easy to do rather than under stress.

You would probably agree with me that these are very industry-specific use cases. They are easier to understand when tied to an actual case that happened.

Let me tell you about a case that happened here last week. It happened to Leela – a 10-year-old white British Shorthair lady cat with gorgeous blue eyes:

Ever since her sister developed a severe kidney issue we have been unobtrusively monitoring their behavior and vital signs. Simple things like weight, food intake, water intake, movement, and regularities (how often x/y/z).

I’ve built hardware to allow us to do that in the most simple and automated way. In the case of getting to know their weight, we simply put the kitty litter box on a heavily modified bathroom scale. I wrote about that back in 2016.

When Leela now visits her litter box she is automatically weighed, and the visit itself is recorded.

A lot of data is aggregated from this, and that data is continuously evaluated to generate indications of issues and to raise alerts.
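
The check behind such an alert does not need to be fancy. A rough sketch of the kind of rule that fired in this case – compare today’s running visit count against a baseline from previous days (window size and threshold factor are illustrative values, not my actual configuration):

```ts
// returns true when today's litter box visits are suspiciously above the
// baseline built from the last `windowDays` daily totals (illustrative values)
function visitsAboveBaseline(
  previousDailyTotals: number[],
  visitsTodaySoFar: number,
  windowDays = 14,
  factor = 1.5
): boolean {
  const window = previousDailyTotals.slice(-windowDays);
  if (window.length === 0) return false;              // no baseline yet
  const baseline = window.reduce((a, b) => a + b, 0) / window.length;
  return visitsTodaySoFar > baseline * factor;        // alert threshold
}
```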

This alerted us last weekend that there could be an issue with Leela’s health, as she was suddenly visiting the litter box a lot more often across the day.

We did not notice anything with Leela ourselves. She behaved as she does every day, but the monitoring detected that something was not right.

What had happened?

The chart shows the hourly average and daily total visits to the litterbox.

On the morning of March 9th Leela had already been to the litter box more often than average – so much above average that it tripped the alerting system. You can see the faded red area at the top of the graph above showing the alert threshold. The red vertical line was drawn in by me to mark when we got alerted.

Now what? She behaved totally normally, except that she went to the litter box a lot more often. We were concerned, as it matched her sister’s behavior, so we went through all the checklists with her on what the issue could be.

We monitored her closely, increased the water supplied, and changed her food so she could fight a potential bladder infection (or worse).

By Monday she still did not behave differently to a degree that would have made anyone suspicious. Nevertheless my wife took her to the vet. And of course, after all the tests were run, a bladder infection was diagnosed.

She got antibiotics, and around Wednesday (March 13th) she actually started to behave much like a sick cat would. By then she was already on day 3 of antibiotics, and after just one day of presumed pain she was back to fully normal.

Interestingly, all of this can be followed in the monitoring data – even that she must have felt worse on the 13th.

With everything back to normal now, it seems that this monitoring has really led us to a case of “predictive cat maintenance”. We hopefully prevented a lot of pain by acting quickly – which was only possible through the monitoring in place.

Monitoring pets is seemingly becoming a thing – which led to my rather funky post-title declaration of the “Internet of Pets”. I know of a certain Volker Weber who even wrote in the current c’t magazine about monitoring his dog’s location.

Health is a huge topic for the future of devices and gadgets. Everyone will casually accumulate more and more devices in their daily lives. Unfortunately most of those won’t be under your own control if you do not insist on being in control.

You do not have to build stuff yourself like I did. You only need to make the right purchase decisions according to the things that are important to you. And one of the items on that checklist should be: “am I in full control of the data flow and data storage?”

If you are not: do not buy!

By coincidence, the idea of having the owner of the data in full control of the data itself is central to my current job at MindSphere. With all the buzz and bells and whistles around the industrial IoT platform, it all comes down to keeping the actual owner of the data in control and in charge. A story for another post!

exercise: develop a Watch app + complication

I’ve started to write a watch app for iOS/watchOS which is going to display the current date according to the discordian calendar.

Since there’s no watch support in any of the calendar apps in the AppStore, and I wanted easy-to-use watch-face support, I had to try it myself.

I will update here on the progress but so far it looks like this:

two factor mandatory for apple developers

Apple has started to force developers who want to develop and publish on the macOS and iOS platforms to enable two-factor authentication.

Two-factor authentication (also known as 2FA) is a type, or subset, of multi-factor authentication. It is a method of confirming users’ claimed identities by using a combination of two different factors: 1) something they know, 2) something they have, or 3) something they are.

Wikipedia

When I just got around to enabling it for one of the Apple accounts I’ve got, there seemed to be a much, much higher security barrier in place already…

That’s probably some sort of zero-factor no-authentication, I guess. Anyway: kudos to Apple for finally forcing people towards minimum standards. Properly integrating the second factor will make this so much simpler for users. Apple’s ecosystem solution is already quite well integrated.

Have you switched all your daily used services to two-factor authentication yet?

fonts for your programming needs

We are looking at our screens for more and more of the day, and most of that time we are reading or writing text. Text needs to look pretty so our eyes do not get sore – apart from the obvious “being able to tell what letter that is”, there is a big portion of personal taste and preference when it comes to the choice of font.

Most of the texts I am writing benefit from monospaced fonts.

This blog celebrates monospaced fonts for programming.
So many fonts have popped up in recent years.

programmingfonts.org/about

Of course there’s a nice page available that previews the fonts right in your browser:

Head Up Display esthetics

Many cars these days come with head-up displays. These kinds of displays make information like the current speed appear to “float” over the street ahead, right in your field of vision.

This has the clear advantage that the driver can stay focused on the street rather than looking away from the street and to the speedometer.

As practical as it seems, these displays are not easy to build and apparently not easy to design either. Every time I came across one, its built-in functionality was limited in a way that made me assume not a lot of thought had gone into what exactly the driver would like to see and how it should be displayed. There was always a lot left to be desired.

Apparently the technology behind these HUDs is at a point where it’s quite affordable to start playing with some ideas to retrofit a car with a more personal and likeable version.

So I started to take a look at what is available – smartphones have bright displays, and I had never tried to see what happens when you use one to project information onto the windshield. So I tried.

As you can see – bright enough, readable but hazy and not perfectly sharp. The reason is quite simple:

“In the special windshield normally used, the transparent plastic safety material sandwiched in between the two pieces of glass must have a slight and very precise wedge, so that the vehicle operator does not see a HUD double image.”

laserfocusworld

There are some retrofit adhesive film solutions available that claim to help with that. I have not tried any yet. To be honest: to my eye the difference is noticeable but not a deal-breaker.

So I’ve tried the apps that are available. They work. But they do a lot of things differently from how I would have expected or done them. They are bearable, but I think it could be done better.

tldr: I started prototyping away and made a list of things that need to be done about the existing HUD applications.
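
The “mirrored” part is the easy bit: since the windshield acts as a mirror, everything has to be drawn flipped so its reflection reads the right way around. In the HTML prototype that is a single transform – which axis to flip depends on how the phone lies on the dashboard, and the element id is made up:

```ts
// flip the whole prototype so its reflection in the windshield reads correctly;
// use "scaleY(-1)" instead if the phone's orientation on the dash calls for it
const hud = document.getElementById("hud"); // hypothetical root element of the prototype
if (hud) {
  hud.style.transform = "scaleX(-1)";
}
```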

mirrored basic html prototype, not well adjusted, just to play…

Here’s my list of what I want to achieve:

  • display orientation according to driving direction – I had expected all HUD applications to do this. They know the driving direction. They know how the device is oriented in space. They can tell which direction the windshield is. They know how to correctly turn the screen. They do not do that. None of them.
  • fonts and numbers – I cannot stand the numbers jumping around when they change up and down
  • speed steps interpolation – GPS only delivers a speed update every second or so, and in that time the speed might jump up or down by more than 1. The display runs at 60 fps and has gyros to play with and interpolate… I want smooth number transitions (see the sketch after this list).
  • have an “eco-meter” – using the gyros the HUD would be able to display harsh acceleration and braking. Maybe display a color-coded bar where whatever is measured makes the bar go left or right…
  • speed-limit display – apparently this is a huge issue looking at the data availability. There seems to be OpenStreetMap data and options to contribute. Maybe that can be added.
  • have a non-HUD mode – non-mirrored, to use for example to set speed limits and contribute them to OpenStreetMap this way!
  • automatically switch between HUD and non-HUD mode – because the device knows its orientation in space: if you pick it up from the dashboard and look at it, why not switch automatically?
  • speed zones color coding – change the color of the speed display depending on configurable speed regions. 0-80 is green, 80-130 is yellow, 130-250 is red.
  • turn the display off when the car is stopped – if nothing is displayed or needs to be displayed, for example because the car has stopped, the display can be turned off completely on its own.
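
For the speed interpolation mentioned above, the basic idea is simple: keep the latest GPS reading as a target and let the displayed value chase it a little on every animation frame. A minimal sketch from the HTML prototype’s point of view (the element id and the smoothing factor are made-up values):

```ts
// smooth a roughly once-per-second GPS speed into a 60 fps display by moving
// the shown value a fraction of the remaining gap each frame (no jumping digits)
let displayedSpeed = 0;   // value currently drawn on screen (km/h)
let targetSpeed = 0;      // latest GPS reading (km/h)

function onGpsSpeed(kmh: number): void {
  targetSpeed = kmh;      // called whenever a new GPS fix arrives
}

function renderLoop(): void {
  displayedSpeed += (targetSpeed - displayedSpeed) * 0.1;    // illustrative factor
  const el = document.getElementById("speed");               // hypothetical element id
  if (el) el.textContent = Math.round(displayedSpeed).toString();
  requestAnimationFrame(renderLoop);
}
requestAnimationFrame(renderLoop);
```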

Navigation is of limited value, as the only way I could think of it adding value would be a serious AR solution that uses the whole windshield. Now, I’ve got these small low-power projectors around… that gets me thinking…

What would you want to have in such a HUD in your car?

Apple Airplay for SONOS (in Docker)

We’ve got a couple of SONOS-based multi-room audio zones in our house, and with the newest generation of SONOS speakers you can get Apple AirPlay. Fancy!

But the older speakers do not support Apple AirPlay due to their limited hardware. This is too bad.

So once again Docker and open source + reverse engineering come to the rescue.

AirConnect is a small but fancy tool that bridges SONOS and Chromecast devices to AirPlay effortlessly. Just start it and be done.

It works a treat, and all of a sudden all those SONOS zones become AirPlay devices.

There is also a nice dockerized version that I am using.

small and cheap multi-sensor nodes for home automation

I had previously reported on my efforts to develop an indoor location tracking system. Back in 2017, when I started to work on this, I only planned to use inexpensive Espressif ESP32 SoCs to look for Bluetooth beacons.

In the meantime I figured that I could, and should, also make use of the multiple digital and analog input/output pins this specific SoC offers. And what better to use them for than a range of sensors that could now feed their measurements into an MQTT feed along with the Bluetooth details.
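
To give an idea of what “feeding into an MQTT feed” looks like on the receiving end, here is a minimal subscriber using the MQTT.js client – the broker address and topic layout are assumptions for illustration, not my actual setup:

```ts
import * as mqtt from "mqtt"; // MQTT.js client library

// connect to the broker and print every message one of the sensor nodes publishes
const client = mqtt.connect("mqtt://broker.local"); // hypothetical broker address

client.on("connect", () => {
  client.subscribe("home/sensornode/+/state");       // hypothetical topic layout
});

client.on("message", (topic, payload) => {
  console.log(`${topic}: ${payload.toString()}`);    // e.g. temperature, humidity, motion
});
```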

And there is a whole lot of sensors that I’ve added. On a breadboard it looks like this:

So what do we have here:

  • Motion sensor
  • Temperature sensor
  • Humidity sensor
  • Light sensor
  • Barometric pressure sensor
  • and of course an RGB LED to show a status

The software is already done, and after 3 weeks of extensive testing it seems to be stable. I will release it eventually, later in the process.

I’ve also found plastic cases that will fit this amount of sensor hardware, replacing the cases I had already bought for the ESP32 alone. For now I’ll close this article with some pictures.

The MQTT feed one of these nodes produces…

…and the Grafana dashboard I am using for this specific prototype device.