Antenna shops were first established in Tokyo in the early 1990s as a means for regional governments to promote local goods and products to the capital’s many residents. Antenna shops for Okinawa and Kumamoto were the first to arrive on the scene in 1994, and prefectural outlets now number around 54 – with 28 shops dotted about the Ginza/Yurakucho area alone. Some prefectures keep only one location, while a handful sell their goods in multiple outlets.
I just recently learned about Krita, an open source drawing application that allows you to… well… do free-hand drawings.
Krita is a FREE and open source painting tool designed for concept artists, illustrators, matte and texture artists, and the VFX industry. Krita has been in development for over 10 years and has had an explosion in growth recently. It offers many common and innovative features to help the amateur and professional alike. See below for some of the highlighted features.
About a year ago there were some very interesting reports about a German inventor and his invention: a highly futuristic, transforming smartphone airbag.
It would be attached to your phone, and when you dropped it, it would automatically deploy and dampen the impact.
Impressive, right? There’s now a Kickstarter campaign behind this to deliver it as a product. All very nice and innovative.
I have no use for a smartphone airbag of any sort. But hear me out on my train of thought:
I do partake in the hobby of quadcopter flying. I’ve built some myself in the past.
Now these quadcopters are very powerful and have very short flight times due to their power demands. Four to five minutes and you’ve emptied a LiPo pack.
Model airplanes, essentially everything with wings, fly much, much longer.
My thought now: why not have a convertible drone?
When the pilot wants, a switch could be flipped and it would convert a low-profile quadcopter into a low-profile quadcopter with wings, similar to how the above-mentioned smartphone “airbag” transforms.
I don’t know anything about mechanics. I have no clue whatsoever. So go figure. But what I do know: the current path of the mini-quad industry is to create ever more powerful and bigger “mini” quadcopters. And this is a good direction for some. It’s not for me. Having a 10 kg, 150 km/h, 50 cm projectile in the air that also carries a 1 kg lithium-polymer, highly flammable and explosion-prone battery pack does frighten me.
Why not turn the wheel of innovation toward the convertible-in-air, much-longer-flight-times direction and make mini-quadcopters even more interesting?
MyFitnessPal is a great online service we are using to track what we eat. It’s well integrated into our daily routine – it works!
Unfortunately MyFitnessPal is not well set up for third-party applications to interface with it. In fact, it appears they are actively trying to make it harder for externals to utilize the data there.
To access your data there’s an open source project called “python-myfitnesspal”, which lets you interface with MyFitnessPal from the command line. This project uses web scraping to extract the information from the website and will break every time MyFitnessPal changes its design/layout.
Since its output is plain command-line text, it is not of great use to a standardized system. What is needed is to send the data into the automation system in a reusable way.
This is why I wrote the additional tool “myfitnesspal2mqtt”. It takes the output provided by python-myfitnesspal and sends it to an MQTT topic. The message can then be decoded, for example with Node-RED, and processed further.
In the end it expands into a multitude of topics with one piece of information per MQTT topic.
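The core idea – one piece of information per MQTT topic – can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual myfitnesspal2mqtt code; the topic names and the broker address are assumptions:

```python
def flatten(prefix, data):
    """Recursively expand a nested dict into (topic, value) pairs,
    one piece of information per MQTT topic."""
    items = []
    for key, value in data.items():
        topic = f"{prefix}/{key}"
        if isinstance(value, dict):
            items.extend(flatten(topic, value))
        else:
            items.append((topic, value))
    return items

def publish_all(data, prefix="myfitnesspal/today", host="localhost"):
    # Requires the paho-mqtt package; the broker address is an assumption.
    import paho.mqtt.client as mqtt
    client = mqtt.Client()
    client.connect(host, 1883)
    for topic, value in flatten(prefix, data):
        client.publish(topic, str(value), retain=True)
    client.disconnect()

# A day's totals roughly as python-myfitnesspal reports them:
totals = {"calories": 1850, "macros": {"protein": 120, "fat": 60}}
print(flatten("myfitnesspal/today", totals))
```

On the receiving side, Node-RED would simply subscribe to `myfitnesspal/#` and route each topic wherever it is needed.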
And with just that, every time the script runs (which I do in a Docker container via a cron job), the whole lot of nutrition and health information is pushed into and stored by the home automation system.
This way it is of course also available for the home automation system to act on.
This experiment is great since it’s completely effortless. You link your block lists once and from then on you keep using Twitter like you always did. Whenever you see a paid promotion you “block” it. From then on, nobody subscribed to the list will see promotions and timeline entries from this specific Twitter user (unless they actively follow them).
And the effect after about a week is just great! I cannot see a downside so far, and the amount of promoted content on my timeline has shrunk to the point where I do not see any at all.
This is a great way to get rid of content you’ve never wanted and focus on the information you want.
In 2012 I experienced streamed gameplay for the first time. I was a beta user of the OnLive service, which created a bit of a buzz back then.
Last week Google announced that it is stepping into the game-streaming business as well. They’ve announced Google Stadia as the Google-powered game-streaming platform. It will come with its own controller.
And this controller is the most interesting bit. We have seen video live streaming. We have seen and played streamed games. But every time we needed some piece of software or hardware that brought screen, controller and player together.
The Google Stadia controllers now do not connect to the screen in front of you. The screen, for all it knows, just shows a low-latency video/audio stream.
The controller connects to your Wi-Fi and directly to the game session. Everything you input with the controller is sent straight to the Google Stadia session in a Google datacenter. No dedicated console hardware in between. And this will make a huge difference. Because all of a sudden the screen is only a screen, and the controller connects to the “cloud console” far, far away – as if it were sitting right below the screen.
Last week we were approached by Prof. Dr. Nicole Zillien from Justus-Liebig-University in Gießen, Germany. She explained to us that she is currently working on a book.
In this book an empirical analysis is carried out on “quantified-self” approaches to real life problems.
With all the information and data we had posted on our personal website(s), like this blog and the “losing weight” page, we apparently qualified for a mention. We were asked whether it would be okay to be named in the book or whether we wanted to be pseudonymized.
Since everything we have posted online and made publicly accessible can and should be quoted, we were happy to give the go-ahead. We’re publishing things because we want them to spur further thoughts.
The book will be out at the end of 2019 / beginning of 2020. As soon as it is out we hope to get a review copy and talk about it on this blog once again.
We do not know what exactly is being written and linked to us – we might as well end up as the worst example of all time. But well, then there’s something to learn in that as well.
A couple of days ago I wrote about a Japanese pun I came across while surfing and doing language learning. I wanted to know more about how these kinds of language tricks work in Japan. And this is what I’ve found:
“Japanese puns, or 駄洒落 (dajare), can be not only groan- or laughter-inducing, but they can also help you improve both the depth and breadth of your language ability.”
Ever since I first visited Tokyo in 2012 I have been in love with the country, the culture and the city. On average I was there four times a year on business.
After leaving Rakuten I went back to Tokyo for a vacation together with my wife in October 2017. The idea was to show her what I was enthusiastically mumbling about all the time when I came back from Japan.
During my stays in Tokyo I’ve lodged in different areas across the city, from the very center to not-so-much-center. Given Tokyo’s great public transportation and taxi system it was always a great experience.
So after a couple of visits I developed a preference for one area that was within walking distance of the Rakuten office, well connected to the public transport system, and offering all sorts of starting points for longer-term daily life. It ticked a lot of boxes.
The area’s name is Musashi-Kosugi (武蔵小杉). It is actually in the city of Kawasaki in Kanagawa prefecture, effectively just across the Tama river from Ota City in Tokyo prefecture.
Like any great neighborhood, everything is conveniently close and the service everywhere is spotless. My hotel of preference is fairly priced and extremely close to the two train stations, so you can get anywhere quickly by train.
You can see the hotel location and the train tracks pretty well on this next map. The red portion shows the viewing direction of the night-picture below.
And like any great neighborhood there’s loads of current information available and lots of community activities throughout the year. In the case of Musashi-Kosugi there’s the more official website and the more up-to-date blog.
If you plan to visit Tokyo I can only recommend taking a look at more off-center accommodation options. I’ve always enjoyed being able to leave centers of buzz like Shibuya or Roppongi and get back into my bubble of quietness without compromising on anything other than party-and-entertainment options. Actual longer-term daily life is much more enjoyable off-center – as you can imagine.
And to end this post: let us enjoy a sunset with parts of the Musashi-Kosugi skyline:
It has been mentioned before: until 2015 I did not have any structure or understanding regarding my own health when it came to food, sports, weight and everything connected to them.
I used to be around 147 kg (324 lbs) when I made the decision to change that. Less than 8 months later I reached the weight considered “normal weight” for my body height.
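For context, those weight classes follow the WHO body-mass-index categories. A quick sketch of the arithmetic – my height is not stated in the post, so the 1.85 m below is purely an assumed example:

```python
def bmi(weight_kg, height_m):
    # body mass index = weight / height squared
    return weight_kg / height_m ** 2

def who_class(b):
    # Simplified WHO adult categories
    if b < 18.5: return "underweight"
    if b < 25:   return "normal weight"
    if b < 30:   return "overweight"
    if b < 35:   return "obesity class I"
    if b < 40:   return "obesity class II"
    return "obesity class III"

print(who_class(bmi(147, 1.85)))  # obesity class III
print(who_class(bmi(75, 1.85)))   # normal weight
```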
Maybe more important than losing weight, and going from class III obesity to normal, is staying in the range that offers the biggest health benefits.
Apart from sport and being more active the key to managing weight in general for me was to understand and keep learning about food from many different perspectives.
As in good science, to learn more and make progress you have to observe and take good notes. Some things can only be understood when there’s enough historical data available. Not only how much time is logged but also what is being logged is quite important.
In a normal week I stick to one meal a day, which I prepare together with my wife. We eat in the evening – this makes things so much easier, as there is one time and place where everything regarding food comes together on routine days.
This routine also works well in the long run. Even if we only have 10 minutes to prepare, we can hit the nutrition targets with either large or not-so-large quantities of food. In the last 4 years we have practiced, played and created recipes for all situations you can think of. In a way we have made our simple-and-healthy recipes the fall-back we use when we would otherwise have eaten something unhealthy.
Where do we track? There are so many options, but since 2015 we have stuck with MyFitnessPal.
We are still happy using it as the base for tracking, as the app is bearable even on Android and its food database offers a good-enough level of detail.
So after more than 4 years of doing this, a lot of data has come together. And since I do it in sync with my wife, a lot of things have happened…
We have not become vegans. We still eat meat and we still like it. It’s just that the quality of the meat we eat has gotten much better, and with this the number of times we eat meat has dropped to maybe once a week at most.
We have started to eat things and experiment with cooking ingredients we did not know a year ago. While we keep adding ingredients all the time, we find that you can gain so much joy from just jumping head first into new tastes and recipes.
We’ve developed a “body feel”. Apart from the taste buds changing completely over time, I could not have imagined how much food influences how you feel. Different nutritional values lead to very different feelings afterwards. I would go as far as to say that most of the headaches I had very frequently while being overweight could be traced back to what I had eaten just before.
So what now? We will keep tracking – maybe not on a cloud service but on a self-hosted one. Maybe you have a good hint toward such self-hosted solutions for entering and tracking nutrition over time.
This feature replaces numeral glyphs set on glyph-specific (proportional) widths with corresponding glyphs set on uniform (tabular) widths. Note that some fonts may contain tabular figures by default, in which case enabling this feature may not appear to affect the width of glyphs.
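On the web, this feature can be toggled via CSS – a small sketch (the class names are placeholders):

```css
/* High-level property for tabular figures */
.price-table { font-variant-numeric: tabular-nums; }

/* Low-level equivalent using the OpenType feature tag directly */
.price-table-legacy { font-feature-settings: "tnum" 1; }
```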
I finished my little coding exercise today. Having used a good Sunday afternoon to understand and develop an iOS and Watch application from scratch, I just handed it in for Apple App Store approval.
The main purpose, aside from the obvious “learning how it’s done”, is that I actually needed a couple of complications on my watch that would show me the current day/date in the Discordian calendar.
I have to say that the overall process of developing iOS and Watch applications is very streamlined. Much much easier than Android development.
The WatchKit development was probably the less great experience in this project. There simply is not a lot of code, documentation or examples for WatchKit yet. And most of what exists is in Swift – which I have not adopted yet; I am sticking to Objective-C for now. With Swift at version 5, and all the upgrades I would have had to do over the last years just to keep up with the language’s development… I guess by sticking with Objective-C I’ve avoided a lot of work.
Anyhow! As soon as the app is through App Store approval I will write again. Maybe somebody actually wants to use it too? :-)
While writing the app I came up with the next idea, for a complication I would really, really need.
In a nutshell: a complication that I can configure to track a certain calendar. It will show the time in days/hours/minutes until the next appointment in that specific calendar. I will have it set up to show “how many hours till waking up”.
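The math behind such a complication is simple. A sketch in Python – the watch app itself would be Objective-C, this just illustrates the calculation:

```python
from datetime import datetime

def time_until(next_event, now=None):
    """(days, hours, minutes) until the next appointment, or None if past."""
    now = now or datetime.now()
    if next_event <= now:
        return None
    delta = next_event - now
    hours, rem = divmod(delta.seconds, 3600)
    return delta.days, hours, rem // 60

# "how many hours till waking up"
now = datetime(2019, 4, 1, 22, 30)
wake = datetime(2019, 4, 2, 6, 0)
print(time_until(wake, now))  # (0, 7, 30)
```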
In the interesting field of IoT, a lot of buzz is made around predictive-maintenance use cases. What is predictive maintenance?
The main promise of predictive maintenance is to allow convenient scheduling of corrective maintenance, and to prevent unexpected equipment failures.
The key is “the right information in the right time”. By knowing which equipment needs maintenance, maintenance work can be better planned (spare parts, people, etc.) and what would have been “unplanned stops” are transformed to shorter and fewer “planned stops”, thus increasing plant availability. Other potential advantages include increased equipment lifetime, increased plant safety, fewer accidents with negative impact on environment, and optimized spare parts handling.
So in simpler terms: if you can predict that something will break, you can repair it before it breaks. This improves reliability and saves costs, even though you repaired something that did not yet need repairs. At the very least you reduce inconvenience by repairing/maintaining while it is still easy to do, rather than under stress.
You would probably agree with me that this is a very industry-specific use case. It’s easier to understand when tied to an actual case that happened.
Let me tell you about a case that happened here last week. It happened to Leela – a 10-year-old white British Shorthair lady cat with gorgeous blue eyes:
Ever since her sister had developed a severe kidney issue we started to unobtrusively monitor their behavior and vital signs. Simple things like weight, food intake, water intake, movement, regularities (how often x/y/z).
When Leela now visits her litter box she is automatically weighed, and the visit is noted.
A lot of data is aggregated from this, and a lot of processing is done on that data to generate indications of issues, and alerts.
This alerted us last weekend that there could be an issue with Leela’s health, as she was suddenly visiting the litter box a lot more often across the day.
We did not notice anything ourselves. Leela behaved as she would every day, but the monitoring detected that something was not right.
What had happened?
On the morning of March 9th Leela had already been to the litter box more often than average. So much above average that it tripped the alerting system. You can see the faded red area at the top of the graph above showing the alert threshold. The red vertical line was drawn in by me, because this was when we got alerted.
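How could such an alert work? A hypothetical sketch – the real system surely differs, but even a simple mean-plus-two-standard-deviations threshold on daily visit counts catches this kind of change:

```python
from statistics import mean, stdev

def visit_alert(history, today, k=2.0):
    """Flag today's litter-box visit count when it exceeds the
    historical mean by more than k standard deviations."""
    threshold = mean(history) + k * stdev(history)
    return today > threshold, threshold

history = [5, 6, 5, 7, 6, 5, 6]    # visits per day over the previous week
alert, threshold = visit_alert(history, today=11)
print(alert, round(threshold, 1))  # True 7.2
```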
Now what? She behaved totally normally, just that she went to the litter box a lot more often. We were concerned, as it matched her sister’s behavior, so we went through all the checklists with her on what the issue could be.
We monitored her closely and increased the water supply as well as changed her food so she could fight a potential bladder infection (or worse).
By Monday she still did not behave differently to a degree that would have made anyone suspicious. Nevertheless my wife took her to the vet. And of course, after all the tests were run, a bladder infection was diagnosed.
She got antibiotics, and around Wednesday (March 13th) she actually started to behave much like a sick cat would. By then she was already on day 3 of antibiotics, and after just one day of presumable pain she was back to fully normal.
Interestingly, all of this can be followed in the monitoring – even that she must have felt worse on the 13th.
With everything back to normal now, it seems this monitoring has really led us to a case of “predictive cat maintenance”. We hopefully prevented a lot of pain by acting quickly – which was only possible through the monitoring in place.
Health is a huge topic for the future of devices and gadgets. Everyone will casually start to have more and more devices in their daily lives. Unfortunately most of those won’t be under your own control if you do not insist on being in control.
You do not have to build stuff yourself like I did. You only need to make the right purchase decisions according to the things important to you. And one of the items on that checklist should be: “Am I in full control of the data flow and data storage?”
If you are not: do not buy!
By coincidence, the idea of having the owner of the data in full control of the data itself is central to my current job at MindSphere. With all the bells and whistles around the industrial IoT platform, it all breaks down to keeping the actual owner of the data in control and in charge. A story for another post!
You want, or have, to use shells – command line interfaces. And it’s something that always leads to Stack Overflow / Google sessions. Or to studying man pages for hours.
But there’s a better way to view and understand these man pages: explainshell.com. Here is an example of what it can do:
As you can see, it not only takes one command and shows you the meaning/function of a parameter. It takes complex structured commands and unfolds them nicely onto a web page – even the harder examples:
Cascading Style Sheets, or CSS for short, are a very powerful tool to control how content is displayed.
CSS is designed to enable the separation of presentation and content, including layout, colors, and fonts. This separation can improve content accessibility, provide more flexibility and control in the specification of presentation characteristics, enable multiple web pages to share formatting by specifying the relevant CSS in a separate .css file, and reduce complexity and repetition in the structural content. Separation of formatting and content also makes it feasible to present the same markup page in different styles for different rendering methods, such as on-screen, in print, by voice (via speech-based browser or screen reader), and on Braille-based tactile devices. CSS also has rules for alternate formatting if the content is accessed on a mobile device.
I frequently come across content I want to read. And almost as frequently I do not have time for a longer read right then.
My workflow for this: I keep a to-be-read backlog of PDF files I have printed from websites. These PDFs are automatically synced to various devices so I can read them at a later stage.
What is often frustrating to see is how bad the print results of website layouts are, as these websites have not even considered the remote possibility of being printed.
With this blog I want to support any workflow, first and foremost my own. Therefore printing this blog adds some print-audience specifics.
For example, the links I am using in the articles are inline when you are using a browser. When you print an article, those links get converted and are written out alongside the text, so you have them in your print-outs without losing information.
And the changes you need to apply to any webpage to instantly enable this are very simple as well! Just add this to your page stylesheet:
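A sketch of such a rule – the exact selectors used on this blog may differ, but the mechanism is a print media query that writes each link target out after the link text:

```css
@media print {
  a[href^="http"]::after {
    content: " (" attr(href) ")";
    word-wrap: break-word;
  }
}
```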
Since 2011 we’ve had this Boogie Board in the household. It’s simply a passive LCD panel on which you can write with a plastic pen. When you do, you’re interacting with the liquid crystals and switching their state – so what was black becomes white.
So we got this tablet, and it’s magnetically pinned to our fridge. Whenever we’ve booked the next trip we cross off days by coloring them in a grid.
My usual Twitter use looks like this: I am scrolling through the timeline reading things and I see an ad. I click block, and never again will I see anything from this advertiser. As I’ve written here earlier.
As Twitter is also a place of very disturbing content, there are numerous services built around the official block-list functionality. One of those services is “Block Together”.
Block Together is designed to reduce the burden of blocking when many accounts are attacking you, or when a few accounts are attacking many people in your community. It uses the Twitter API. If you choose to share your list of blocks, your friends can subscribe to your list so that when you block an account, they block that account automatically. You can also use Block Together without sharing your blocks with anyone.
For about 2 years now I have been using Todoist as my main task-management / to-do-list service.
This has led to a lot of interesting statistics and usage patterns, as the service integrates oh-so-nicely into a lot of daily tasks.
What kind of integration is it? Glad you asked!
At first we used all sorts of different ways to manage task lists across the family, with the main lists, around which everything evolved, being the personal tasks and todos of each family member as well as the obvious grocery shopping list.
We had been happy customers of Wunderlist before, but then Microsoft bought it and announced they would soon shut it down and replace it with “To Do” out of Office 365. Not being Office 365 customers led us to a dead end on this path.
And then Amazon Alexa showed up, and we naturally wanted to use those assistants around the house to add things to shopping and todo lists right away. Unfortunately neither Wunderlist nor the intermediate solution Toodledo was integrated with Alexa.
Then there suddenly was a window of opportunity. We wanted Alexa integration and at least all the features we knew from Wunderlist and Toodledo – and Todoist delivered right out of the box.
It takes todos and shopping items from Alexa, through the website and through apps; Siri can use it; and in general it’s well integrated with lots of services. You can even send it e-mails! Also, we’ve never experienced syncing issues whatsoever.
And it’s the little things that really make a difference. Like that Chrome browser integration above.
You see that “Add website as task”? Yes, it does exactly what you would expect. Within Chrome, in two clicks you’ve added the current website’s URL and title as a task to any of your lists in Todoist. I’ve never been a fan of favourites/bookmarks in browsers, because I usually do not keep history or bookmarks for long. But I often need to add a website to a list to work through later in the day. I used to send myself e-mails with those links, but this is a much better solution to keep track of them and not have them pile up over time.
Which allows you to marvel at your progress and bask in the immense productivity you’ve shown.
But hey – there’s actual value coming from this. If you do it for a year or two you get nice statistics that show how you structured your day and how you might be able to improve. Look at a simple yearly graph of how many tasks were completed at specific times of the day.
So while most people in the office spend their time on lunch breaks, I usually complete the most tasks from my task list. Also, I start quite early, “before the crowd”, and it shows: lots of stuff gets done then.
And improvements show as well. On a yearly basis you can see, for example, how many tasks you postponed/re-scheduled and when. Like those Mondays, which are currently the days on which most tasks get postponed. What to do about that?
I recently wrote about how I am using ThinClients in our house to always have a ready-to-use working environment that gets shared across different desks and work places.
To complete the zoo of devices, I wanted to take the chance and write about another kind of device we’re using when the purpose fits: Chrome OS devices.
A little over a year ago I was given an HP Chromebook 11 G5, and this little thing has been in use ever since.
The hardware itself is very average and works just right. The only two things that could be better are the display and the trackpad. With the trackpad you can help yourself with an external mouse.
The display works for the device size, but the 1366×768 resolution is definitely a limiting factor for some tasks.
What is not a limiting factor, astonishingly, is the operating system. I did not have any expectations at all when I first started using the Chromebook but everything just fell into place as expected. A device with almost no local storage and everything on the google cloud as well as a device that you can simply pick up and start using with just your google account may not sound crazy innovative. But let me tell you: if you start living that thin client, cloud stored life these Chrome OS devices hit the spot perfectly.
Everything updates in the background and as long as you are okay with web based applications or Android based applications you are good to go.
Did I miss anything function-wise? Yes. At the beginning there was no real shell or Linux tooling available natively on Chrome OS. This has changed.
Would I buy another one, or do I recommend it, and for whom? I would buy another one, and I would recommend it for certain audiences.
I would recommend it for anyone who does not need to run anything that isn’t available in the Google Play Store – anything that can be done on the web can be done with the Chromebook. And as long as you don’t require anything native or higher-spec that forces you to have “Windows-as-a-hobby” or a beefy macOS device sitting around, I guess these inexpensive Chrome OS devices really have their niche.
For kids I guess this would make a great “my first notebook”, as it works when you need it and does not lock you in too much if you want to start exploring. But then again: what do I know – I do not have kids.
Learning a new language is full of discoveries along the way!
The more time I spend on learning the Japanese language, the more things seem to unlock. One of those things is the apparent fun Japanese companies have with puns / slight writing mismatches.
Like this one – I think (as I cannot be 100% sure yet… learning!):
This is an advertisement in a supermarket for a laundry detergent. It is themed around an anime called “Attack on Titan” – probably because the detergent’s name is Attack. So when I tried to make sense of the text, I first read it wrong, of course.
Let’s look at it step-by-step:
I first started reading the Hiragana portion to make sense of it. There I made my first mistake, which was to misread the second character: for some reason my brain went for わ (wa) when I should have gone for れ (re).
Then I typed away further and came to the Kanji. I read 活 (katsu) when it in fact was 汚 (kitanai).
If you type those into Google Translate you get very interesting results. I had a good laugh by then:
I am not sure if this is on purpose – I do not yet know whether I am just making a mess of this, or whether it is intentionally done so that, depending on your level of Japanese reading and the attention you spend reading it, you get very different and funny results.
Any Japanese readers that can add some explanations? Am I far off with the thoughts?
When you want to make things happen on a schedule, or log them when they took place, a calendar is a good option. Even more so if you are looking for an intuitive way to interact with your home automation system.
Calendars can be shared and your whole family can have them on their phones, tablets and computers to control the house.
In general I am using the Node-RED integration of Google Calendar to send and receive events between Node-RED and Google. I am using the node-red-node-google package, which comes with a lot of different options.
Of course, when you are using those nodes, you need to configure the credentials first.
Part 1: Control
So you got those light switches scattered around. You got lots of things that can be switched on and off and controlled in all sorts of interesting ways.
And now you want to program a timer when things should happen.
For example: you want to control when a light is switched on and when it is switched off again.
I created a separate calendar in Google Calendar to which I add events in a notation I came up with: those events have a start datetime and of course an end datetime.
When I now create an event with the name “test” in the calendar…
And in Node-Red you would configure the “google calendar in”-Node like so:
When you have wired this correctly, every time an event in this calendar starts you will get a message with all the details of the event, like so:
With this you can now go crazy with the actions – like using the name to identify which switch to switch, or the description to add extra information to your flow and the actions to be taken. This is fully flexible. And of course you can control it from your phone if you want.
Part 2: Information
You may also want events that happened to be logged in a calendar rather than a plain logfile. This comes in very handy, as you can easily see, for example, when people arrived at or left home, or when certain long-running jobs started and ended.
To achieve this you can use the calendar-out nodes for Node-RED and prepare a message using a function node like this:
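A sketch of such a function node – the payload field names below are assumptions on my side; check the info panel of the calendar-out node for the exact format it expects:

```javascript
// Turn a device event message into a calendar entry.
// (Field names are assumptions, not the exact node-red-node-google schema.)
function toCalendarEvent(msg) {
    const now = new Date(msg.ts || Date.now());
    msg.payload = {
        summary: msg.device + ": " + msg.event,   // e.g. "washer: finished"
        description: JSON.stringify(msg.details || {}),
        start: now.toISOString(),
        end: now.toISOString(),
    };
    return msg;
}

// Inside the actual Node-RED function node the body is simply:
//   return toCalendarEvent(msg);
```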
And as said, we are using it for all sorts of things: when the cat uses her litter box; when the washing machine, dryer or dishwasher starts and finishes; simply to count how many Nespresso coffees we’ve made; when members of the household arrive at or leave places like work or home; when movement is detected; or whenever anything out of order or notable needs to be written down.
And of course it’s as convenient as it can be – here’s the view of a recent Saturday:
I was asked recently how I enabled my home automation system to send push notifications to members of the household.
The service all of our notification needs are served by is Pushover.
Pushover gives you a simple API and device management, and allows you to trigger notifications with icons and text to be sent to either all or specific devices. It lets you specify a message priority so that the more or most important push notifications are even pushed through when your phone is set to do-not-disturb.
The device management and API, as said, are pretty simple and straightforward.
As for the actual integration I am using the NodeRed integration of Pushover. You can find it here: node-red-contrib-pushover.
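To give an idea of how a notification gets prepared in a flow, here is a sketch of a function node feeding the Pushover node. The conventions used (message text in `payload`, title in `topic`, a numeric `priority`) mirror the Pushover API, but verify them against the node's README.

```javascript
// Sketch: build a message for a Pushover node. The field conventions
// (payload = text, topic = title, priority = Pushover priority) are
// assumptions to verify against node-red-contrib-pushover's docs.
function buildNotification(text, urgent) {
    return {
        topic: "Home automation",     // notification title
        payload: text,                // notification body
        priority: urgent ? 1 : 0      // 1 = high priority in the Pushover API
    };
}
```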
With the newest client for iOS it even got integration for the Apple Watch. So you are not only limited to text and images: you can also send out a state that updates automatically on your watch face.
As Pushover has been consistent in service and updates, I don’t miss anything – though I have not tested it extensively on Android.
I had redone the header of this blog a while ago, but since I was trying out some things on the template I wanted something more dynamic – without any additional dependencies.
So I searched and found:
Tim Holman did a very nice implementation of this “worm generator” using only the HTML5 canvas tag and some math. I made some very slight changes and integrated it into the header graphic. It will react to your mouse movement and reset if you click anywhere. Give it a go!
I had a problem to solve: I wanted the exact same session and screen shared across different workplaces and locations simultaneously – from looking at the same screen from a different floor, to having the option to just walk over to the lab desk, solder some circuits together, and find the very same documents already open on the screens over there.
One option was to use a tablet or notebook and carry it around. But this would not solve the need to have the screen content displayed on several screens simultaneously.
Also, I did not want to rely on the computing power of a notebook or tablet alone. Of course those get more powerful over time, but every step up would mean purchasing a new one.
Then, in a move of desperation, I remembered the “old days” when thin clients were the new kid in town. And then I tried something:
It turns out: nothing stands in the way, really. Docker is well prepared to host desktop environments. With a bit of tweaking and TigerVNC’s Xvnc, I was able to pre-configure the most current Ubuntu to start my preferred MATE desktop environment in a container and expose it through VNC.
So I took one of those Raspberry Pis, booted up Raspbian Desktop Lite, and connected to the container’s VNC port. It all worked just like that.
The screenshot above holds an additional piece of information for you: I wanted sound! Video plays smoothly up to a certain size of moving image – after all, those Raspberry Pis only come with sub-Gbit/s wired networking. But to get sound working I had to take some additional steps.
First, on the Raspberry Pi that is to output the sound to the speakers, you need to install and set up PulseAudio plus paprefs. Once you have configured it to accept audio over the network, you can configure the clients accordingly.
In the docker container a simple command would then redirect all audio to the network:
pax11publish -e -S thinclient
Just replace “thinclient” with the IP or hostname of your Raspberry Pi. After a restart, Chrome started to play audio across the network through the speakers of the thin client.
Now all my screens have those Raspberry Pis attached to them, and with Docker I can run as many desktop environments in parallel as I wish. And because VNC does not care how many connections are made to one session, I can have all workplaces across the house connected to the same session, seeing the same content at the same time.
And yes: the UI and overall feel is silky smooth. Since VNC adapts to the available bandwidth to some extent by changing the image quality, the sessions are very much usable even across the internet. Given that there’s only one port for video and one port for audio, it’s even possible to tunnel those sessions to anywhere you might need them.
Working in the IT industry requires us to spend copious amounts of time focused on our screens, mostly sitting at our desks. But it does not have to be that way.
For me, sitting down for long periods creates a lot of unwanted effects and eventually leaves me unable to focus properly.
In 2015 my wife and I attacked that “health problem” as a team, and in the 12 months leading into 2016 we lost a combined 120 kg / 260 lbs of body weight and completely changed the way we deal with food and sport.
With that, I also changed the way I work: from then on, sitting down was the exception.
Coinciding with this lifestyle change, my then-employer Rakuten rolled out its new workplace concept, and everyone got great electric standing desks that let you change the height up and down effortlessly.
When I started with SIEMENS of course their workplace concept included standing desks as well!
For the times I am working from home, one workplace is equipped with a standing desk with an additional twist.
This desk lets you work while standing, but it also allows you to walk while you work: you can set the speed anywhere from 0 to 6.4 km/h.
Given a good headset, I can personally attend conference calls without anyone noticing I am walking at a pace of about 4 km/h.
When I spend a whole day working at this desk, it is not uncommon to accumulate 25–40 km of total distance without really noticing it. Of course, later in the day you will feel those 40 km one way or the other.
It took a bit of getting used to, as your feet are doing something entirely different from the rest of your body. But at least for me it started to feel natural very quickly.
I’ve put two curved 24″ monitors on it, and aside from the docking ports for a company notebook I am using thin clients to get the screens of my usual work machines teleported there. There’s a bit of a media set-up as well, as sometimes I use one of the screens for watching videos.
For those now interested in purchasing such a great walking desk: I can only recommend doing so! But keep a few things in mind:
There are not a lot of vendors of such appliances, and those vendors do not sell a lot of them. This means: be ready for a €1,000+ purchase, and be ready to shell out some good money on extended warranties.
My first desk-and-treadmill combination was replaced three times. It was LifeSpan’s first generation of treadmill desks, and it just kept failing – I actually had glowing sparks of fire spitting out of the first-generation treadmill.
I returned it at no loss and waited for the second generation. This current, second generation of LifeSpan treadmill desks has been working for me longer than the first generation ever did without breaking. Looking at the use the device gets, I see it as a purchase spread over five years: after five years of actual, consistent use I would not be overly annoyed if the mechanical parts stopped working. I am not expecting such a device to live much longer anyhow.
Energy-wise, it’s quite impressive how much this thing consumes; I wasn’t quite expecting those levels. So here’s for you to know:
So just around 500 Watts when in use. The 65W base load is the monitors and computers on top.
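A quick back-of-the-envelope calculation, assuming (purely for illustration) six hours of walking and ten hours of the monitors and computers being on per day:

```javascript
// Rough daily energy estimate for the walking-desk set-up.
// The 500 W while walking and 65 W base load come from the measurement
// above; the daily hours are assumptions for illustration.
const walkingWatts = 500;
const baseWatts = 65;
const hoursWalking = 6;
const hoursOn = 10;
const dailyKWh = (walkingWatts * hoursWalking + baseWatts * hoursOn) / 1000;
console.log(dailyKWh + " kWh per day");
```

So a full walking day adds a few kilowatt hours on top of the normal office load.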
I can only recommend trying something like this out. Unfortunately, it’s quite hard to find a place to test one; at least I was not able to try before buying.
But then again I could answer your questions if you had any.
For a couple of months now, we have been trying harder to learn a foreign language.
And as we expected, it is very hard to get a proper grasp on speaking the language, especially since it is very different from our mother tongue.
While comfortably interacting with digital assistants around the house every day in English and German, the thought came up: why don’t these digital assistants help with foreign-language listening and speaking training?
I mean, Google Assistant answers questions in the language you asked them in. Siri and Alexa need to know upfront which language you are going to ask questions in. But at least Alexa can translate between languages…
But in all seriousness: why do we not already have this obvious killer feature? Everyone could already have a personal language-training partner…
When you take a picture with an iPhone these days, it generates haptic feedback – a “kachung” you can feel – and a shutter sound.
Thankfully the shutter sound can be disabled in many countries. I know it can’t be disabled on iPhones sold in Japan, which kept me from buying mine in Tokyo: even when you switch the region to Europe / Germany it will still produce the shutter sound.
Anyway: With my iPhone, which was purchased in Germany, I can disable the shutter sound. But it won’t disable the haptic “kachung”.
It’s interesting that Apple added this vibration to the act of taking a picture. Other camera manufacturers go out of their way to decouple as much vibration as possible, even to the extent that they will raise the mirror and open the shutter in their DSLRs ahead of actually taking the picture – just so that the vibration of the mirror movement and shutter does not induce vibration into the exposure.
With mirrorless cameras that vibration is gone. But now it is being introduced again?