Health-related Icons for your Apps and Sites

Found this nice heap of icons that are free to use and high quality:

Health Icons is a volunteer effort to create a ‘global good’ for health projects all over the world. These icons are available in the public domain for use in any type of project.

The project is hosted by the public health not-for-profit Resolve to Save Lives as an expression of our commitment to offer the icons for free, forever.

https://healthicons.org/about

Reading out non-smart (water/gas/…) meters

The only meter in our house that I was not yet able to read out automatically was the water meter.

With the help of a great open source project by the name of AI-on-the-edge and an ESP32 camera module it is quite simple to regularly take a picture of the meter, convert it into a digital read-out and send it off through MQTT.

The process is straightforward:

  1. Flash the ready-made firmware image to the module
  2. Configure the WiFi using an SD card
  3. Put the module directly over the meter
  4. Connect to it and set up the reference points and the meter recognition marks

As you can see above, all the recognition is done on the ESP32 module with its 4 MB of RAM.

With the data sent through MQTT it’s easy to draw nice graphs:
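The receiving side can be just a few lines of Python. A minimal sketch, assuming the paho-mqtt (1.x) client and a hypothetical broker and topic (AI-on-the-edge lets you configure the real ones) – every reading gets appended to a CSV file that any graphing tool can pick up:

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # AI-on-the-edge publishes the recognized meter value as the payload
    reading = float(msg.payload.decode())
    with open("watermeter.csv", "a") as log:
        log.write(f"{msg.topic},{reading}\n")

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.local", 1883)       # hypothetical broker hostname
client.subscribe("watermeter/main/value")  # hypothetical topic name
client.loop_forever()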

my 4 layer corne split keyboard layout (germany)

I’ve been using my corne split keyboard for about 3 weeks now and during that time I’ve made a couple of changes to the layout.

Right now I am quite happy with how my typing is coming along these days, but I guess over time I am still going to optimize further.

Nevertheless I want to document my layout here, in a picture and with the .json file that can be used with the QMK Configurator.

building a corne split keyboard

It’s been a while since, during a Hack-the-Planet episode, I was gifted two PCBs of a corne keyboard by PH_0x17 of Nerdbude and ClickClackHack fame.

Since a picture says more than a thousand words, I give you the result first:

my crkbd based keyboard

This keyboard design is made from the ground up as open source and naturally is fully available as a Git repository containing everything you need to start: PCB schematics, drawings, documentation and firmware source code.

It took me a couple of months to get all the required parts ordered and delivered. Many small envelopes with parts that seemingly are only produced by a handful of manufacturers. But anyways: after everything had arrived and was checked for completeness, my wife took the hardware parts into her hands and started soldering and assembling the keyboard.

And so this project naturally split up between my wife and me in the most natural (to us) way: my wife did all the hardware parts – whilst I did the software and interfacing portion. (Admittedly, all that was left to figure out was how to get the firmware compiled and altered to my specific needs.)

Hardware

So make the jump over to my wife’s blog and enjoy the hardware portion over there. Come back for the software portion. I will only leave some pictures of the process here:

Software

After putting the hardware together it was time to get the firmware sorted as well. This keyboard design is based upon the open source QMK (Quantum Mechanical Keyboard) firmware.

Conveniently QMK comes with its own build tools – so you will be up and running in no time. Since I had purchased Arduino ProMicro controllers I was good with the most basic setup you can imagine. As the base requirements for the toolchain were minimal I went with the machine that I had in front of me – a Raspberry Pi 4 with the standard Raspberry Pi OS.

These were the steps to get going:

  • get Python 3 and the qmk tool installed – I’ve chosen not to use the tool setup procedure but instead go with a separate clone of the QMK firmware repository.
python3 -m pip install --user qmk
  • clone the QMK firmware repository and get the QMK tool running (in the /bin folder of the firmware repository – it’s actually just a Python script)
git clone https://github.com/qmk/qmk_firmware.git
cd qmk_firmware
git submodule sync --recursive
git submodule update --init --recursive --progress
make crkbd:default
  • create your own keymap to work with. You gotta use the crkbd firmware options as a default for this keyboard. The command below will generate a subfolder with the name of your keymap in the keyboards/crkbd/keymaps folder, pre-filled with the default settings of the crkbd keyboard firmware.
qmk new-keymap -kb crkbd
  • build your first firmware by running the command below (note: btk-corne is the name of my keymap)
qmk compile --clean -kb crkbd/rev1/legacy -km btk-corne
success! The first firmware is compiled
  • now you can flash the firmware to both ProMicro controllers. The most straightforward way for me was using avrdude on the command line. In my case the device shows up as /dev/ttyACM0 and the compiled firmware is named crkbd_rev1_legacy_btk-corne.hex.

    When you have all this information you need to plug in the ProMicro and trigger a reset by bridging Ground and the Reset pin. If you added a reset button, like we did, you can use that. After hitting reset the ProMicro bootloader enters a state where it can be flashed. Reset it and THEN run the avrdude command line.

    The full commandline is:
avrdude -p atmega32u4 -P /dev/ttyACM0 -c avr109  -e -U flash:w:crkbd_rev1_legacy_btk-corne.hex
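(For the curious: -p atmega32u4 selects the chip, -c avr109 speaks the Caterina bootloader protocol the ProMicro uses, -e erases the flash first and -U flash:w:… writes the new image.)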
  • (alternatively) you can also use QMK Toolbox to flash the firmware. Also works.

So now you know how to get the firmware compiled and running (if not, look here for more details). But most probably you are not happy with some aspects of your keymap or firmware.

By now you might ask yourself: Hey, I’ve got two ProMicros on one keyboard. Both are flashed with the same firmware. Into which of the two do I plug the USB cable that then goes to the computer?

The answer is: by default QMK assumes that you are plugging into the left half of the keyboard, making the left half the master. If you prefer to use the right half you can change this behaviour in the firmware’s config.h by defining MASTER_RIGHT there.

You have to plug in both of them anyway whenever you want to flash a new firmware as you adjust and make changes to your keymap.

Thankfully QMK comes with loads of options and even a very useful configurator tool. I used this tool to adjust the keymap to my requirements. The process there is straightforward again. Open up the configurator and select the correct keyboard type. In my case that is crkbd/legacy. The basic difference between legacy and common is a different communication protocol between the two halves. This really is only important when features are used that require some sort of sync between the two halves – like some RGB LED effects. Since I did not add any LEDs to the build I go with legacy for now. Maybe I need some features later that require me to go with common.

The configurator allows you to set up the whole keymap and upload/download it as a .json file.

That .json file can easily be converted into the C code that you need to place in the actual keymap.c file. Assuming the .json file you got is named btk-corne.json, the full command line is:

qmk json2c btk-corne.json

Then simply take this output and replace the corresponding keymap block in keymap.c with it:

Now you compile and flash again. And if all went right you’ve got the new keymap and firmware on your keyboard and it’ll work just like that :)

on joining the LuckyV GTA-RP developer team

Disclaimer: I’ve joined for fun and not for profit – this is a new hobby.

For about a year now I was regularly watching some Twitch streamers go about their business, and it piqued my curiosity when some of them started to do something they called “GTA V roleplay”.

Grand Theft Auto V (GTA V) is a 2013 action-adventure game developed by Rockstar North and published by Rockstar Games. Set within the fictional state of San Andreas, based on Southern California, the open world design lets players freely roam San Andreas’ open countryside and the fictional city of Los Santos, based on Los Angeles. The game is played from either a third-person or first-person perspective, and its world is navigated on foot and by vehicle.

Wikipedia

So these streamers were mostly using an alternative client application to log into GTA V online servers that were operated by independent teams, to play the roles of characters they created themselves.
It really got interesting when dynamics and interactions started happening between those characters, and whole stories unfolded over the course of days and weeks.

It’s great fun watching and having the opportunity to sometimes see multiple perspectives (by multiple streamers) of the same story, and eventually even to be able to interact with the streamers’ communities.

One such fairly big German server is LuckyV. It’s an alternative GTA V hardcore role-play server created by players for players.

The “hardcore” here means: the characters are supposed to act, as much as possible, like they would in the encountered situations in real life.

So in order to play on this server you have to create a character and that character’s background story. You gotta really play that character when on the server.

When you play it’s not just a vanilla GTA V experience. There are lots of features that are specific to the server you are playing on. Some examples are:

  • Communication: you communicate with people in your vicinity directly – you can hear them if they are close enough to be heard, and you can be heard when you are close to others
  • Jobs: there’s lots to be done. Become CEO of your own company and manage it!
  • Social Interaction: there’s probably an event happening just around the next corner. You are able to meet people. Crowds of people even. Remember: there are usually no non-players. Every person you see is a real human you can interact with.

The LuckyV community made a great overview page where you can watch other people playing and live-streaming their journey. It’s extensive – over 200 streamers are online regularly, and the screenshot below shows a mid-week day right after lunch…

LuckyV Streamer overview page

Anyhow. This is all great and fun but plot twist: I do not play it. (yet)

So what do I have to do with it, except watching streamers? Easy: behind the game there’s code. Lots of code actually.

In a nutshell there’s a custom GTA V server implementation that talks to a custom GTA V client. LuckyV is using the altV server and client to expand the functionalities and bring the players into the world.

It allows for 1000 simultaneous players in the same world at a time. So there could be 1000 people right there with you. Actually, since LuckyV is about to have its first birthday, the regular player numbers are peaking at around 450 simultaneous players in Los Santos at a time.

The whole set-up consists of several services all put together:

  • altV server + custom gamemode code (written in C#)
  • web pages for game overlays, in-game UI and administration tools (PHP)
  • a SQL database that holds the item, character etc. data
  • a pub/sub style message hub that enables communication between in-game UI, webpages and the gamemode (see the sketch after this list)
  • a TeamSpeak 3 server that allows players to join a common channel (essentially one TeamSpeak room) and a plug-in called SaltyChat that mutes/unmutes players in the vicinity and allows features like an in-game mobile phone etc.
  • everything of the above runs in containers and is easily deployable anywhere you’ve got enough hardware to run it – when there are 100s of players online the load of the machine grows almost linearly – and the machine is doing its money’s worth then…
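LuckyV’s actual hub implementation is not public, so just to illustrate the pub/sub idea, here is a minimal Python sketch assuming a Redis-style hub with a hypothetical channel name and payload:

import json
import redis

r = redis.Redis(host="localhost")  # hypothetical hub address

# UI / webpage side: listen for gamemode events as they arrive
sub = r.pubsub()
sub.subscribe("ui:events")

# gamemode side (would live in a different service): announce an in-game event
r.publish("ui:events", json.dumps({"type": "phone_ring", "player_id": 42}))

for message in sub.listen():
    if message["type"] == "message":
        event = json.loads(message["data"])
        print("UI reacts to:", event)  # e.g. render the ringing-phone overlay
        break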

So after the team announced some vacancies through those streamers I watched, I contacted them and asked if I could help out.

And that’s how I got there working on both the gamemode code as well as helping the infrastructure become more stable and resilient.

For my first real contribution to the gamemode I was asked to implement secondary keys for vehicles as well as apartments/houses.

Up until now only the owner / tenant of the vehicle or apartment had access to it. Since this game is about social interactions it would be a good addition if that owner could hand out additional keys to those they love / interact with.

And that I did. I worked my way through the existing code base – which is a “grown” codebase – and after about 3 days of work it worked!

Most impressive for me is the team and the people I’ve met there. The current team welcomed me warmly and helped me wrap my head around the patterns in the code. Given the enthusiast / hobby character this has, it’s almost frightening how professionally and nicely everything works out. I mean, we developers had a demo session with the game design team to show off what our feature does and how it works, and to let them try it out to see if it’s like they envisioned it.

They even did a trailer for the feature I worked on! And it is as cheesy as I could have wished for:

So far so good: it’s great fun and really rewarding to work with all these nice people to bring even more fun and joy to players. Seeing the player numbers grow. Seeing streamers actually use the features and play with them – handing over keys to their partners. Really rewarding.

Like this example:

Just at 2 hrs 5 min, Ariane Barnes is handing over a key to her loved one.

DOS64

So this is interesting: normally a Windows program (executable), if you try to run it anywhere else, will show a message like “This program cannot be run in DOS mode” and terminate.

Printing this message is actually done by a little program whose only task is to print this very message. So it can be overwritten.

Michal Strehovský did exactly this, very impressively. He documented what he did to get the game “snake”, written in C#, running on DOS instead of the “does not run here” stub – in a single executable file that runs both on standard 90s MS-DOS as well as on Windows with the .NET Framework installed.

He used a quite elaborate toolchain – namely DOS64-stub.

You can read all of this in the full thread. I recommend a deeper dive, as it’s a great start to better understand the inner workings of your computer…

Periodic Table of AI

Everyone knows the “Periodic Table of the Elements” from chemistry class. The periodic table is an intuitive and quick “Lego kit” that helps us intellectually grasp complicated relationships between building blocks (atoms) and molecules (natural substances, rocks or metals).

The American computer scientist Kristian Hammond has attempted to design a lingua franca for artificial intelligence. In reference to chemistry, he calls it the “Periodic Table of Artificial Intelligence”.

The Periodic Table of Artificial Intelligence helps map the term AI onto business processes and build an understanding of its elements – much like the periodic table of chemical elements. The approach helps with understanding and with assessing market readiness, effort, the required machine training, and employees’ knowledge and experience.

TubeTime and BitSavers

I have pointed to BitSavers before. And I will do it again, as it’s a never-ending source of joy.

Now some old schematics spilled into my feeds that show how logic gates used to be implemented with transformers only.

BitSavers brought it up:

And not only BitSavers is on this path of sharing knowledge – TubeTime is also such a nice account to follow and read.

Blender 3D – December was full of content

So with the new year under way it might be worth looking into some patterns different from the ones we usually deal with. So how about a bit of 3D graphics, shaders and modelling?!

Get your gear:

Blender is the free and open source 3D creation suite. It supports the entirety of the 3D pipeline—modeling, rigging, animation, simulation, rendering, compositing and motion tracking, video editing and 2D animation pipeline.

https://www.blender.org/

And then get a starting point. Be quick – as this is on Twitter, it might fade away:

There’s so much interesting stuff in there – and lots to learn!

a proper 7-segment / 14-segment font

DSEG is a free font family which imitates seven- and fourteen-segment displays (7SEG, 14SEG). DSEG has these special features:

  • DSEG includes roman-alphabet and symbol glyphs.
  • More than 50 variants are available.
  • TrueType fonts (*.ttf) and Web Open Font Format files (*.woff, *.woff2) are included in the package.
  • DSEG is licensed under the SIL Open Font License 1.1.

Get it here.

dangerously curious bitcoins

Some things you find on GitHub are more interesting and frightening than others.

This one is both and some more. What is it you ask?

R2 Bitcoin Arbitrager is an automatic arbitrage trading application targeting Bitcoin exchanges.

So it’s buying and selling Bitcoins. And it’s doing this on different markets.
On the topic of arbitrage Wikipedia has something to say:

In economics and finance, arbitrage is the practice of taking advantage of a price difference between two or more markets: striking a combination of matching deals that capitalize upon the imbalance, the profit being the difference between the market prices at which the unit is traded.

For example, an arbitrage opportunity is present when there is the opportunity to instantaneously buy something for a low price and sell it for a higher price.

https://en.wikipedia.org/wiki/Arbitrage

Now this is already the second version of the tool, and it is already 2 years old. See it as some sort of interesting archaeological specimen. Please refrain from actually doing something harmful with it.

I am writing this down here because, apart from its obvious horrors, this is a good starting point to understand how these computer trading systems work in principle.
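The core idea fits in a few lines – a toy Python sketch with hypothetical quotes and fees, not R2’s actual code:

# toy arbitrage check between two exchanges (hypothetical numbers)
FEE = 0.002  # assumed 0.2 % taker fee per side

def arbitrage_profit(ask_a: float, bid_b: float) -> float:
    """Profit per BTC when buying at exchange A's ask and selling at B's bid."""
    cost = ask_a * (1 + FEE)       # what we pay, fee included
    proceeds = bid_b * (1 - FEE)   # what we receive, fee deducted
    return proceeds - cost

profit = arbitrage_profit(41_900.0, 42_100.0)
if profit > 0:
    print(f"buy at A, sell at B, expected profit {profit:.2f} per BTC")
else:
    print("no opportunity after fees")

The hard part is everything around this loop: order execution, position limits, latency – and the very real risk that one leg of the trade fails.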

Given that an architectural drawing is also included, it offers all sorts of starting points for thought.

Also: what could possibly go wrong with a tool that buys/sells on actual markets with actual bitcoins and is confident enough to include a “maxTargetProfit” configuration option – effectively setting the top line of profit you’re going to make!!!111

Linux mac80211 compatible full-stack Wi-Fi design based on SDR

In a tweet we were given an early Christmas present – open-sdr released an open source software Wi-Fi stack that utilizes software-defined radio technology to implement actual working Wi-Fi.

Features:

  • 802.11a/g; 802.11n MCS 0~7; 20MHz
  • Mode tested: Ad-hoc; Station; AP
  • DCF (CSMA/CA) low MAC layer in FPGA
  • Configurable channel access priority parameters:
    • duration of RTS/CTS, CTS-to-self
    • SIFS/DIFS/xIFS/slot-time/CW/etc
  • Time slicing based on MAC address
  • Easy to change bandwidth and frequency:
    • 2MHz for 802.11ah in sub-GHz
    • 10MHz for 802.11p/vehicle in 5.9GHz
  • On roadmap: 802.11ax

See this demonstration:

about brains and silicon wafers

Please read this first paragraph and let it settle:

At the core of the BrainScaleS wafer-scale hardware system (see Figure 90) is an uncut wafer built from mixed-signal ASICs [1], named High Input Count Analog Neural Network chips (HICANNs), which provide a highly configurable substrate that physically emulates adaptively spiking neurons and dynamic synapses (Schemmel et al. (2010), Schemmel et al. (2008)).

I’ve highlighted in bold the portion that I want you to think about once more. We are not talking about chips, dies or cut-up wafers.

We are talking about real-size, huge, fully developed wafers filled with logic. For the sole purpose of brain-scale neural network research and development…

The Neuromorphic Computing Platform allows neuroscientists and engineers to perform experiments with configurable neuromorphic computing systems. The platform provides two complementary, large-scale neuromorphic systems built in custom hardware at locations in Heidelberg, Germany (the “BrainScaleS” system, also known as the “physical model” or PM system) and Manchester, United Kingdom (the “SpiNNaker” system, also known as the “many core” or MC system). Both systems enable energy-efficient, large-scale neuronal network simulations with simplified spiking neuron models. The BrainScaleS system is based on physical (analogue) emulations of neuron models and offers highly accelerated operation (10^4 × real time). The SpiNNaker system is based on a digital many-core architecture and provides real-time operation.

https://electronicvisions.github.io/hbp-sp9-guidebook/index.html

time/space synchronization symbols, AGC training preamble, Viterbi detection/equalization, LDPC decoding and MIMO

Of course this post is talking about hard disks. The ones with spinning disks and read/write heads flying very close to the spinning disk’s surface.

There are several links to the source papers and works discussing the findings – take a look into this nice rabbit hole:

Drag and drop ML with transparency

The machine-learning tooling is getting better. Take a look at Perceptilabs:

Fast modeling
With our drag and drop GUI we enable fast model development.

Increased transparency
The statistical dashboard increases the model’s transparency during training.
Get a better understanding of your model with instant feedback on the operations outputs.
We enable fast error debugging with our custom code editor.

Flexibility
Full flexible options for plugins and importing. Execute any custom Python code in our code editor.

DIRECTIVE 2009/24/EC – Article 6 – Decompilation

Article 6
Decompilation

  1. The authorisation of the rightholder shall not be required
    where reproduction of the code and translation of its form
    within the meaning of points (a) and (b) of Article 4(1) are
    indispensable to obtain the information necessary to achieve
    the interoperability of an independently created computer
    program with other programs, provided that the following
    conditions are met:

    (a) those acts are performed by the licensee or by another
    person having a right to use a copy of a program, or on
    their behalf by a person authorised to do so;

    (b) the information necessary to achieve interoperability has not
    previously been readily available to the persons referred to
    in point (a); and

    (c) those acts are confined to the parts of the original program
    which are necessary in order to achieve interoperability.
  2. The provisions of paragraph 1 shall not permit the information obtained through its application:

    (a) to be used for goals other than to achieve the interoperability of the independently created computer program;

    (b) to be given to others, except when necessary for the interoperability of the independently created computer program;
    or

    (c) to be used for the development, production or marketing of
    a computer program substantially similar in its expression,
    or for any other act which infringes copyright.
  3. In accordance with the provisions of the Berne
    Convention for the protection of Literary and Artistic Works,
    the provisions of this Article may not be interpreted in such a
    way as to allow its application to be used in a manner which
    unreasonably prejudices the rightholder’s legitimate interests or
    conflicts with a normal exploitation of the computer program.

Original in English and German.

Tabemono – from a name to UX and UI…

As you might know by now, I am re-implementing MyFitnessPal functionality in my own application – to be more deeply integrated with kitchen hardware and my own personal use-cases, rather than being an ad-infested, subscription-based 3rd-party application.

So the development of this is ongoing, but I wanted to note down some progress and explanation.

Let’s start with explaining the name: Tabemono.

It really does mean something – and as some might have guessed, it’s Japanese:

Tabemono – 食べ物

Taking just the first Kanji: 食 – “eat” / “food”.

Implementing the UI from the UX has proven to be as challenging as expected.

When we started to toss around the idea of re-implementing our food-tracking needs we started with a simple scribble on post-it notes.

This quickly led to a digital version of this to better reflect what we wanted to happen during the different steps of use…

It wasn’t nice but it did act as a reminder of what we wanted to achieve.

The first thing we learned here was that this will all evolve while we are working on it.

So during a long international flight I spent the better part of 11 hours on getting the above drawing into something resembling an iOS user interface mock-up. With the help of Adobe XD (free for 1 private project) I clicked along, and after 10 hours this was the video I made of the click-dummy:

Since then I’ve spent maybe 1 more day and started the SwiftUI based implementation of the actual iOS application.

And this brought the first revelation: There are so many ideas that might make sense on paper and in a click-dummy. But only because those are just tools and not reality. It’s absolutely crucial to really DO the things rather than imagine them.

And so the second revelation came: if I had one piece of advice for any product manager or developer out there, it would be: go on, pick a project and try to go full circle.

You ain’t full stack if you’re missing out on the understanding of the work and skill that your team members have and need.

SwiftUI on the Web

SwiftUI is the new cool kid on the block when it comes to iOS/iPadOS/macOS application development.

When Apple announced SwiftUI in early 2019 it naturally focused only on making all the declarative UI goodness available for the Apple operating systems. No non-Apple platforms in focus. Naturally.

But there are ways. With the declarative way of creating user interfaces, one can apparently simply re-implement the UI controls and have them render as HTML / JavaScript…
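Conceptually it looks something like this toy sketch (Python standing in for Swift here – this is not SwiftWebUI’s actual API): a declarative view tree where every node knows how to render itself to HTML:

from dataclasses import dataclass, field

@dataclass
class Text:
    content: str
    def render(self) -> str:
        return f"<span>{self.content}</span>"

@dataclass
class Button:
    label: str
    action: str  # endpoint the browser calls back into the server
    def render(self) -> str:
        return f"<button onclick=\"fetch('{self.action}')\">{self.label}</button>"

@dataclass
class VStack:
    children: list = field(default_factory=list)
    def render(self) -> str:
        inner = "".join(child.render() for child in self.children)
        return f"<div style='display:flex;flex-direction:column'>{inner}</div>"

view = VStack([Text("Hello web"), Button("Tap me", "/action/tap")])
print(view.render())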

The SwiftWebUI project is aiming to do so:

Unlike some other efforts this doesn’t just render SwiftUI Views as HTML. It also sets up a connection between the browser and the code hosted in the Swift server, allowing for interaction – buttons, pickers, steppers, lists, navigation, you get it all!

In other words: SwiftWebUI is an implementation of (many but not all parts of) the SwiftUI API for the browser.

To repeat the Disclaimer: This is a toy project! Do not use for production. Use it to learn more about SwiftUI and its inner workings.

SwiftWebUI

Making a RISC-V operating system using Rust

As RISC-V progressively challenges the existing ARM processor ecosystem it’s interesting to see more and more software projects popping up that target the RISC-V architecture.

Here’s one project that aims to develop (and explain along the way) how to create an operating system from scratch. On top of the RISC-V specifics, this tutorial also aims to teach how all of this can be done in a programming language called Rust.

Keep in mind that all of this is done on a bare-metal system. No other software is running.

RISC-V (“risk five”) and the Rust programming language both start with an R, so naturally they fit together. In this blog, we will write an operating system targeting the RISC-V architecture in Rust (mostly). If you have a sane development environment for RISC-V, you can skip the setup parts right to bootloading. Otherwise, it’ll be fairly difficult to get started.

This tutorial will progressively build an operating system from start to something that you can show your friends or parents — if they’re significantly young enough. Since I’m rather new at this I decided to make it a “feature” that each blog post will mature as time goes on. More details will be added and some will be clarified. I look forward to hearing from you!

The Adventures of OS

REST-API testing: Reqres

I am back again, developing some smaller APIs for my own use.

I am learning a new programming language and framework (SwiftUI), and for my little learning project I also need to implement a server backend. Implementing a RESTful service is quite straightforward, but for testing and UI prototyping I want to try things out before really setting up the server side.

To easily test RESTful calls without actually implementing anything, I found that Reqres is a quite useful tool to have in the toolbelt:

Apart from some pre-set-up API endpoints with dummy data (like users, …) it also features a request mirror service.

With that you can simply throw a JSON document into the general direction of Reqres and it will put a timestamp on it and return it right away.

Like so:
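A quick sketch with Python’s requests, POSTing to Reqres’ standard /api/users endpoint, which echoes the document back with an id and a createdAt timestamp attached:

import requests

payload = {"name": "btk", "job": "api tinkerer"}
resp = requests.post("https://reqres.in/api/users", json=payload)

print(resp.status_code)  # 201
print(resp.json())       # the payload plus an 'id' and a 'createdAt' timestamp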

Odometer for the HUD

Since I am back at developing the Head-Up-Display app I was writing about in February (yeah, mornings got darker again!), I want to leave this nice-looking Odometer JavaScript library here:

Odometer is a Javascript and CSS library for smoothly transitioning numbers. See the demo page for some examples.

Odometer’s animations are handled entirely in CSS using transforms making them extremely performant, with automatic fallback on older browsers.

odometer

Hack-The-Planet Podcast: Episode 009

replacing MyFitnessPal

Well, it’s about time to do something about MyFitnessPal. In our family we’re using their service daily. But just for logging. No reports, no further features used.

But still, we have been using it for quite some time now:

almost 5 years, logged every single day.

The service has been rolling out ads in their apps for some time now. There are only iOS / Android apps available. And a mediocre website.

Just recently they announced that their free service will restrict how many years of history are going to be stored. Of those 5 years we will lose 3.

In addition, the whole offering has never gotten to a point where I would have decided to upgrade to the paid premium version. No functionality ever got added. No interfacing with scales, no optimizations for UI/UX, …

But now they reduce the functionality and service and want me to cough up a bit of money:

I am not generally against subscriptions. But I am not getting 9,99 Euro of value out of the service. A shared Google sheet would almost achieve parity. And the price itself is just not value-based. For 2 Euro I probably would not feel the urge to move on. 9,99 (times 2 for 2 accounts) makes me move.

So I’ve sat down with my wife and we scribbled up some things we want to have in a replacement. The content and feature-set are agreed on. Now I’ll throw together a prototype app.

It’ll be integrated with the MQTT scales. And with the flow we came up with we hopefully will reduce the interactions dramatically compared to MyFitnessPal. And it’ll never stop saving history. And I’ll learn something new.

using AI to generate human faces from emojis and thumbnails

Back in March 2019 we’d already seen artificial people. Yawn.

Back then a Generative adversarial network (GAN) was used to produce random human faces from scratch. It synthesized human faces out of randomness.

Now take it a step further and input actual small images. Like thumbnails or emojis or something else.

And what do you get?

Quite impressive, eh? There’s more after the jump.

Oh and they wrote a paper about it: Progressive Face Super-Resolution via Attention to Facial Landmark

Magnificent app which corrects your previous console command

We all know this: you typed a loooong line of commands in your shell and you made one typo.

That’s the worst.

Now. There’s a command that aims to help:

It is rather simple. But extremely effective.

The Fuck attempts to match the previous command with a rule. If a match is found, a new command is created using the matched rule and executed.

Grab it on GitHub. Install it right away. It went into my toolbelt in an instant. After a mistyped command you simply type fuck and it offers the corrected command to run.

Wave Function Collapse

I’ve written on this topic before here. And as developers venture more into these generative algorithms it’s all that more fun to see even the intermediate results.

Oskar Stålberg writes about his little experiments and bigger libraries on Twitter. The above short demonstration was created by him.

Especially worth a look is the WaveFunctionCollapse library available on GitHub: mxgmn/WaveFunctionCollapse.
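To get the gist of the algorithm, here is a deliberately tiny 1-D sketch of my own (not Gumin’s implementation): every cell starts as a superposition of all tiles, the most constrained undecided cell gets “observed” (collapsed to one tile), and adjacency constraints are propagated until things settle.

import random

# which tiles may sit next to each other (symmetric adjacency rules)
RULES = {
    "sea":   {"sea", "coast"},
    "coast": {"sea", "land"},
    "land":  {"land", "coast"},
}
N = 24

def propagate(cells):
    # narrow neighbouring option sets until nothing changes any more
    changed = True
    while changed:
        changed = False
        for i in range(N - 1):
            right_ok = set().union(*(RULES[t] for t in cells[i]))
            left_ok = set().union(*(RULES[t] for t in cells[i + 1]))
            for j, allowed in ((i + 1, right_ok), (i, left_ok)):
                narrowed = cells[j] & allowed
                if narrowed != cells[j]:
                    cells[j] = narrowed
                    changed = True

cells = [set(RULES) for _ in range(N)]  # every cell starts as "all tiles"
while any(len(c) > 1 for c in cells):
    # observe the most constrained undecided cell: pick one of its options
    i = min((k for k, c in enumerate(cells) if len(c) > 1), key=lambda k: len(cells[k]))
    cells[i] = {random.choice(sorted(cells[i]))}
    propagate(cells)

print(" ".join(next(iter(c)) for c in cells))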

Some more context, of questionable helpfulness:

In quantum mechanics, wave function collapse occurs when a wave function—initially in a superposition of several eigenstates—reduces to a single eigenstate due to interaction with the external world. This interaction is called an “observation”. It is the essence of a measurement in quantum mechanics which connects the wave function with classical observables like position and momentum. Collapse is one of two processes by which quantum systems evolve in time; the other is the continuous evolution via the Schrödinger equation. Collapse is a black box for a thermodynamically irreversible interaction with a classical environment. Calculations of quantum decoherence predict apparent wave function collapse when a superposition forms between the quantum system’s states and the environment’s states. Significantly, the combined wave function of the system and environment continue to obey the Schrödinger equation.

Wikipedia: WFC

Right. Well. Told you. Here are some nice graphics of this applied to calm you: