Booting a computer does not happen very often in most use cases, yet it’s a field that has not seen as much optimization and development as others have.
NTT DoCoMo might not have been the first to release feature phones with actual emoji characters for text messaging. But their set of original emojis is just oh-so-beautiful to look at.
With the help of Monica Dinculescu we can now enjoy these emojis on our modern computing machines.
You can either download the font for free directly from Monica’s page, or you can use her SVG code to make further use of the great emojis.
If you go for the SVG link you will get an overview like the one at the start of this post. If you want to further work with the raw vector data (SVG) in there, you can use this simple trick:
Step 1: Locate the emoji you want in the code of the page, for example by using your browser’s developer tools.
Step 2: Copy that specific element to your clipboard / into a new text document.
Step 3: Add the proper XML declaration before the element you’ve copied:
<?xml version="1.0" encoding="utf-8"?>
Step 4: Save the contents as a file with the .svg extension. You can now open it in any SVG-compatible editor, like Inkscape.
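The four steps can be sketched in a few lines of Python. The element string here is a placeholder shape, not one of the actual emoji paths:

```python
# Prepend the XML declaration to a copied SVG element and save it as a
# standalone .svg file. The <path> below is a stand-in shape, not one of
# the actual emoji glyphs from the page.
copied_element = (
    '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16">'
    '<path d="M2 2 L14 2 L8 14 Z"/>'
    '</svg>'
)

header = '<?xml version="1.0" encoding="utf-8"?>\n'

with open("emoji.svg", "w", encoding="utf-8") as f:
    f.write(header + copied_element)
```

The resulting emoji.svg opens directly in Inkscape or any other SVG editor.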
We’ve got several quite big fish tanks in our house, mainly inhabited by freshwater turtles.
These turtles need to be fed every once in a while. While this is normally not an issue, it becomes one when you leave the house to travel for an extended period of time.
Of course there are humans checking on everything in the house regularly, but as much as can be automated should and will be automated in our household. So the requirement was to have the turtle feeding automated.
To achieve this it would be necessary to have a fixed amount of turtle food dispensed into the tanks on a schedule, with some checks in the background (like water quality and such).
It’s been quite a hassle to come up with a plan for how the hardware should look and work. Ultimately I’ve settled on retrofitting an off-the-shelf fish pond feeder to become controllable through MQTT.
The pond feeder I’ve found and used is this one:
It’s not really worth linking to a specific product detail page as this sort of feeder is available under hundreds of different names. It always looks the same and is priced right around the same.
If you want to build this yourself, you want one that looks like the above. I’ve bought 3 of them and they all seem to come out of the same factory somewhere in China.
Anyway. Once you’ve got one you can easily open it up and start modifying it.
The functional principle of the feeder is rather simple:
- turn the feeder wheel
- take the micro-switch status into account – when it’s pressed down, the wheel must be pushing against it
- turn it until the micro-switch is not pressed anymore
- turn some more until it’s pressed again
Simple. Since the switch status is not known after a power loss / reboot, a calibration run is necessary every time it boots up (even with the factory electronics).
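The principle above can be sketched as a small simulation (pure Python with a mocked switch timeline; the real firmware drives a motor pin and polls a GPIO pin instead):

```python
# Simulate one feeding turn: run the motor until the micro-switch is
# released, then keep running until it is pressed again. The switch is
# mocked here as a timeline of readings; the real firmware would poll a
# GPIO pin while the motor is running.
def do_one_turn(switch_readings, max_steps=100):
    """Return the number of motor steps taken, or -1 on safety cut-off."""
    readings = iter(switch_readings)
    steps = 0
    pressed = True  # assume the wheel starts pressed against the switch
    # Phase 1: turn until the switch is released
    while pressed:
        pressed = next(readings)
        steps += 1
        if steps > max_steps:
            return -1  # safety cut-off, like the MaximumMotorRuntime limit
    # Phase 2: turn some more until the switch is pressed again
    while not pressed:
        pressed = next(readings)
        steps += 1
        if steps > max_steps:
            return -1
    return steps

# A mocked run: pressed for 3 readings, released for 4, then pressed again
timeline = [True, True, False, False, False, False, True]
print(do_one_turn(timeline))  # prints 7
```

The two phases mirror the real mechanism: release of the switch proves the wheel started moving, and the next press marks exactly one completed turn.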
After opening the feeder I’ve cut the two cables going to the motor as well as the micro-switch cables. I’ve added a 4-pin JST-XH connector to both ends, so I can reconnect it to its original state if desired.
These are all the parts needed:
I am using a Wemos D1 Mini and a couple of additional components apart from the prototype board:
A PN2222 NPN transistor, a rectifier diode 1N4007 and a 220 Ohm resistor.
I’ve connected everything according to this schematic I’ve drawn with Fritzing:
I’ve then prototyped away and put everything on the PCB. Of course with very limited soldering skill:
As you can see the JST-XH connector on Motor+Switch can now be connected easily to the PCB with all the parts.
Make sure you check the polarity and that you’ve correctly hooked up the motor and switch.
When done correctly, the PCB (I’ve used a 40 mm x 60 mm prototype PCB) and all cables will fit into the case. There’s plenty of room and I’ve put it to the side. I’ve also directly connected a USB cable to the USB port of the Wemos D1 Mini. As long as you supply at least 1 A, it will all work.
Since the Wemos D1 Mini sports an ESP8266 and is well supported by Arduino, it was clear to me to use the Arduino IDE for the software portion of this project.
Of course everything, from the schematics to the source code, is available as open source.
To get everything running you need to modify the .ino file in the src folder like so:
What you need to configure:
- the output pins you have chosen – D1+D2 are pre-configured
- WiFi SSID + PASS
- MQTT Server (IP(+Username+PW))
- MQTT Topic prefix
Commands are sent through MQTT to the /feed topic.
There are two MQTT topics overall:
The /state topic holds the current state of the feeder as a number starting from 0. When the feeder is ready it will be 0. When it’s currently feeding it will be 1 or higher, counting down for every successful turn done. There is a safety cut-off for the motor: if the motor is active for longer than configured in the MaximumMotorRuntime variable, it will shut off by itself and set the state to -1.
The /feed topic acts as the command topic to start / control the feeding process. To start the process you send the number of turns you want to happen – 1 to 5 seems reasonable. The feeder will show the progress in the /state topic. You can update the amount at any time to shorten / lengthen the process. On the very first feed request after initial power-up / reboot, the feeder will do a calibration run to make sure that all the wheels are in the right position to work flawlessly.
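The state semantics can be modeled in a few lines (a sketch of the behavior described above, not the actual firmware; the class and method names are made up for illustration):

```python
# Model of the feeder's /state values:
#    0  -> idle / ready
#    n>0 -> currently feeding, n turns remaining (counts down per turn)
#   -1  -> safety cut-off triggered (motor ran longer than allowed)
class FeederState:
    def __init__(self):
        self.state = 0

    def feed(self, turns):
        """Handle a message arriving on the /feed topic."""
        self.state = turns

    def turn_completed(self):
        """One successful wheel turn: count the state down."""
        if self.state > 0:
            self.state -= 1

    def motor_timeout(self):
        """Motor exceeded MaximumMotorRuntime: flag the error state."""
        self.state = -1

feeder = FeederState()
feeder.feed(3)           # like: mosquitto_pub -t .../feed -m 3
feeder.turn_completed()
feeder.turn_completed()
print(feeder.state)      # prints 1 (one turn remaining)
```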
So if you want to make it start feeding 3 times:
mosquitto_pub -t house/stappenbach/feeder/feeder-00F3B839/feed -m 3
And if you want to see the state of the feeder:
mosquitto_sub -v -t house/stappenbach/feeder/feeder-00F3B839/state
All in all, three of these are going to be running in our household, and the feeding is going to be controlled either by Alexa voice commands or through Node-RED automation.
I am still working on it – but it is coming together nicely. During the next vacation our fish tanks are going to be well fed.
Augmented Reality – AR – has been getting some buzz here and there for almost 20 years now. Even with hardware becoming more powerful and optics + light hardware becoming cheaper and more efficient, it’s still nowhere close to being widely used and available.
Many refer to some one-trick-pony feature in location-based games like “Pokemon Go” as “AR”. Actually useful applications of AR exist, but they are not feasible with current hardware generations.
Nevertheless, a team in California has taken out the scissors and keyboards and made HoloKit:
HoloKit features super sharp optics quality and a 76-degree diagonal field of view. Pairing with a smartphone, HoloKit can perform an inside-out tracking function, which uses the changing perspective on the outside world to note changes in its own position. HoloKit merges the real and the virtual in a smart way. While you see through the real world, virtual objects are blended into it. Powered by the accurate gyro and camera on smart phones, HoloKit solidly places virtual objects onto your table or floor, as if they were physically there without physical markers. These virtual objects will stay in the same place even if you walk away, just like real physical objects.
https://holokit.io/
HoloKit is different from screen-based AR experience like Tango. You can directly see through the headset and view the real world as is, and in the meantime the virtual objects are projected on top of the real world, as opposed to viewing both the real and the virtual through a smartphone camera.
Browsers can do many things. Yours is probably your main window into the vast internet. Lots of things need visualization. And if you want to know how it’s done, and maybe build one yourself, then…
And to further learn what it’s all about, go to Amelia Wattenberger’s blog and take a stroll:
So, you want to create amazing data visualizations on the web and you keep hearing about D3.js. But what is D3.js, and how can you learn it? Let’s start with the question: What is D3?
An Introduction to D3.js
While it might seem like D3.js is an all-encompassing framework, it’s really just a collection of small modules. Here are all of the modules: each is visualized as a circle – larger circles are modules with larger file sizes.
Augmented Reality needs proper 3D geometry and the ability to sense the environment to interact with it. At some point I would expect tools to show up that allow us to do some of this ourselves.
Seems like we’re one step closer. Ubiquity6 is reaching out to get early access to interested users:
We’re giving early access to our 3D mapping tools for creators and artists! If you’re interested in trying it out sign up for early access here: https://ubiquity6.typeform.com/to/bmpbkB
Ubiquity6 on Twitter
Of course. I applied. And I’ve just started testing.
I am having a hard time learning Japanese, especially reading/writing the kanji.
Having to write Japanese city names frequently (for example when doing searches), I usually remember the spoken version of the name but do not quite remember the kanji version yet. Also, I do not want to switch back and forth between keyboard languages.
For this, there is a nice way around it, especially on macOS and iOS. With the built-in “Text Replacement” feature of your Mac or iPhone/iPad you can easily mass-import a mapping between the romanized version of a word and the Japanese kanji version of that word.
While you are typing you will then be presented with recommended text replacements – effectively the kanji of what you’ve just tried to write.
Unfortunately I do not know a way how to mass-import these text-replacements on iOS.
But if you own a macOS computer and have it synced over iCloud with your mobile phone or tablet, you can open the Text Replacement pane in your system settings and import this plist file into it. Simply drag the file (after unzipping the ZIP file) into the Text Replacement window.
Download the Tokyo-Text-Replacement.zip file, extract it (by double-clicking), and drag the .plist file into the Text Replacement window.
For you to derive your own files you can find the raw data, a list of all designated Ken and Ward names in Tokyo here:
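Generating such a plist from your own word list can be sketched with Python’s standard plistlib module. The phrase/shortcut keys below match what the macOS Text Replacement pane exports in my experience, but verify against a file exported from your own machine; the sample mapping is a tiny hypothetical excerpt:

```python
import plistlib

# Map romanized names to their kanji versions. Tiny sample list;
# substitute your own mapping, e.g. derived from the raw ward-name data.
mapping = {
    "shinjuku": "新宿",
    "shibuya": "渋谷",
    "chiyoda": "千代田",
}

# macOS Text Replacement exports an array of dicts with
# "shortcut" (what you type) and "phrase" (what it becomes).
entries = [{"shortcut": romaji, "phrase": kanji}
           for romaji, kanji in mapping.items()]

with open("Text-Replacement.plist", "wb") as f:
    plistlib.dump(entries, f)
```

The resulting file can then be dragged into the Text Replacement window just like the ready-made one above.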
In Nodes you write programs by connecting “blocks” of code. Each node – as we refer to them – is a self contained piece of functionality like loading a file, rendering a 3D geometry or tracking the position of the mouse. The source code can be as big or as tiny as you like. We’ve seen some of ours ranging from 5 lines of code to the thousands. Conceptual/functional separation is usually more important.
Nodes.io
*(not to be confused with node.js)
It’s been a year since Zenvent posted this:
A Hackintosh (a portmanteau of “Hack” and “Macintosh”), is a computer that runs macOS on a device not authorized by Apple, or one that no longer receives official software updates.
https://en.wikipedia.org/wiki/Hackintosh
Kind of Bloop is a chiptune tribute to Miles Davis’ Kind of Blue, a track-by-track 8-bit reinterpretation of the bestselling jazz album of all time.
Launched as a Kickstarter project in April 2009, only two weeks after Kickstarter itself opened its doors, the album’s production was funded by 419 backers around the world.
Kind of Bloop was released on August 17, 2009, on the 50th anniversary of Kind of Blue.
Download at the link.
XamariNES is a cross-platform Nintendo emulator using .NET Standard written in C#. This project started initially as a nights/weekends project of mine to better understand the MOS 6502 processor in the original Nintendo Entertainment System. The CPU itself didn’t take long, working on it a couple of hours here and there. I decided once the CPU was completed, how hard could it be to just take it to the next step and do the PPU? Here we are a year later and I finally think I have the PPU in a semi-working state.
XamariNES
If you ever wanted to start looking at and understanding emulation, this might be a starting point for you – with high-level C# being used to describe and implement actually existing hardware, like the NES CPU:
The author does the full circle and everything you’d expect from a simple working emulator is there:
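To get a feeling for what such a core looks like, here is a toy fetch-decode-execute loop in Python, covering just two 6502-style opcodes (immediate-mode LDA and ADC, with status flags omitted). A real core like the one in XamariNES implements the full instruction set, all addressing modes and the flags:

```python
# Toy fetch-decode-execute loop in the spirit of a 6502 core.
# Two opcodes only, no status flags, no addressing modes beyond
# immediate -- just enough to show the shape of an emulator core.
LDA_IMM = 0xA9  # load accumulator with the next byte
ADC_IMM = 0x69  # add the next byte to the accumulator
BRK     = 0x00  # stop execution (a real 6502 BRK triggers an interrupt)

def run(program):
    a = 0   # accumulator
    pc = 0  # program counter
    while pc < len(program):
        opcode = program[pc]; pc += 1        # fetch
        if opcode == LDA_IMM:                # decode + execute
            a = program[pc]; pc += 1
        elif opcode == ADC_IMM:
            a = (a + program[pc]) & 0xFF; pc += 1   # 8-bit wraparound
        elif opcode == BRK:
            break
        else:
            raise ValueError(f"unknown opcode {opcode:#04x}")
    return a

# LDA #$10; ADC #$22; BRK  -> accumulator ends at 0x32
print(hex(run([LDA_IMM, 0x10, ADC_IMM, 0x22, BRK])))  # prints 0x32
```

Everything else in an emulator – memory mapping, I/O, the PPU – hangs off this same basic loop.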
I am running most of my in-house infrastructure based on Docker these days…
Docker is a set of platform-as-a-service (PaaS) products that use operating-system-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels.
All containers are run by a single operating-system kernel and are thus more lightweight than virtual machines.
Wikipedia: Docker
And given the above definition it’s fairly easy to create and run containers of things like command-line tools and background servers/services. But due to Docker being “terminal only” by default, it’s quite hard to do anything UI-related.
But there is a way. By using the VNC protocol to get access to the graphical user interface, we can set up a container running a fully-fledged Linux desktop and connect directly to this container.
I am using what I call “throw-away Linux desktop containers” all day, every day, for various needs and uses. Every time I start such a container it is brand-new and ready to be used.
Actually when I start it the process looks like this:
As you can see, when the container starts up it asks for a password to be set. This is the password that needs to be entered when the VNC client connects to the container.
And when you are connected, this is what you get:
I am sharing my scripts and Dockerfile with you so you can use them yourself. If you put a bit more time into it you can even customize it to your specific needs. At this point it’s based on Ubuntu 18.04 and starts up an Ubuntu MATE desktop environment in its default configuration.
When you log into the container it will log you in as root – but effectively you won’t be able to really screw around with the host machine, as the container is still isolating you from it. Nevertheless, be aware that the container has some quirks and runs in privileged mode.
Chromium will be pre-installed as a browser, but you will find that it won’t start up. That’s because Chromium refuses to start when run as the root user.
A lot means, a lot:
To have a chance to get on top of things and save space, try this:
npm i -g npkill
By then running npkill you will get an overview (after a looong scan) of how much disk space there is to be saved.
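What npkill does can be sketched roughly like this (a simplified, read-only version that only reports sizes and never deletes anything):

```python
import os

def find_node_modules(root):
    """Yield (path, size_in_bytes) for every node_modules directory under root."""
    for dirpath, dirnames, _ in os.walk(root):
        if "node_modules" in dirnames:
            nm = os.path.join(dirpath, "node_modules")
            size = 0
            # Sum up every file inside this node_modules tree
            for sub, _, files in os.walk(nm):
                for name in files:
                    try:
                        size += os.path.getsize(os.path.join(sub, name))
                    except OSError:
                        pass  # broken symlinks etc.
            yield nm, size
            dirnames.remove("node_modules")  # don't descend into it again

for path, size in find_node_modules("."):
    print(f"{size / 1024 / 1024:8.1f} MB  {path}")
```

The real tool additionally lets you interactively pick and delete the directories it found.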
I have been using 1Password for years now. It’s a great tool. So far.
As I am using it locally, synced across my own infrastructure, I feel like I am slowly but surely being pushed out of their target customer group. What does that mean?
The current pricing scheme, if you buy new, for 1Password looks like this:
So it’s always going to be a subscription if you start out with it today and go the straightforward route.
It used to be a one-time purchase per platform, and you could set up syncing across other cloud services as you saw fit. If you really start from scratch, the 1Password apps still give you the option to create and sync locally, but the direction is set and clear: they want you to sign up for a subscription.
I am not going to purchase a subscription. With some searching I found software that is extremely similar to 1Password and fully featured – and it is available as a one-time purchase per platform, for all platforms I am using.
Also, this one is the first that could import my 1Password export files straight away without any issues. Even One-Time Passwords (OTP) worked immediately.
The name is Enpass, and it’s available for Mac, Windows, Linux, iOS and Android, and basically acts as a drop-in replacement for 1Password. It directly imports what 1Password exports. And its pricing is:
Subscriptions for services like this are a no-go for me. It’s a commodity service for which I am willing to pay for updates and maintenance every year or so through a major upgrade.
I am not willing to pay a substantial amount of money per user per month just to keep access to my passwords. And having them synced onto some company’s infrastructure does not make the deal any sweeter.
Enpass on the other hand comes with the peace of mind that no data leaves your infrastructure and that you can get the data in and out at any time.
It can import from these:
As mentioned, I’ve migrated from 1Password in a matter of minutes and was able to replace it immediately.
So I leave this right here:
It really does implement a lot of what an operating-system UI and portions of the backends are supposed to be. It looks quite funky and there are applications for it. Of course it’s open source.
I want all Electron apps to start existing there so I can access all of them with just a browser from anywhere.
If you want to give it a spin, click here:
Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. In order to do so, we show that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash-no flash input pairs.
Deep Image Prior Paper
Apart from its diverse applications, our approach highlights the inductive bias captured by standard generator network architectures. It also bridges the gap between two very popular families of image restoration methods: learning-based methods using deep convolutional networks and learning-free methods based on handcrafted image priors such as self-similarity.
AI and deep learning are not always necessary or helpful. In this case impressive results have been achieved without the use of any of the hyped technologies.
In this case you give the algorithm two inputs: a base video that you want to stylize and a base picture that resembles the style you want to achieve.
We introduce a new example-based approach to video stylization, with a focus on preserving the visual quality of the style, user controllability and applicability to arbitrary video. Our method gets as input one or more keyframes that the artist chooses to stylize with standard painting tools. It then automatically propagates the stylization to the rest of the sequence. To facilitate this while preserving visual quality, we developed a new type of guidance for state-of-art patch-based synthesis, that can be applied to any type of video content and does not require any additional information besides the video itself and a user-specified mask of the region to be stylized. We further show a temporal blending approach for interpolating style between keyframes that preserves texture coherence, contrast and high frequency details. We evaluate our method on various scenes from real production setting and provide a thorough comparison with prior art.
Stylizing Video by Example Paper
Apparently there is also a Windows demo available in which you are supposedly able to create your own stylized short clips. But when I wanted to try it out, it threw a lot of funky messages flagging the application as untrustworthy / possibly malicious. So be aware and cautious.
It does not happen often. And it was not down for long. Actually, I like those “service down” messages from several websites. Does anyone remember the Fail Whale?
Maybe you want to give EasyEDA a try, as its in-browser experience is better than anything I had come across so far. Granted, I am not doing PCBs regularly, but nevertheless – whenever I tried with the programs that were recommended to me, it wasn’t as straightforward as it is with this tool.
The image [..] is a visual/artistic experiment playing with simultaneous contrast resulting from other experiments these days. An over-saturated colored grid overlayed on a grayscale image causes the grayscale cells to be perceived as having color.
original article
The processing needed to create the above image happened along with unrelated but significant code improvements in the last couple of weeks. I have been visiting mitch – one of the most prolific GIMP contributors – for collaboration, and lots of progress has been – and is still – being made on babl, GEGL and GIMP.
As people around me were discussing how to manage their growing number of private Git repositories, I joined their discussion.
A couple of years ago I assessed how I would want to store my collection of almost 100 private Git repositories, plus all those cloned mirrors I want to keep for archival and sentimental reasons.
None of the options seemed desirable – like chaining my workflows to GitHub as a provider, or adopting a new hobby of operating and maintaining a private GitLab server. It might have become easier to operate a GitLab server with the introduction of container management systems, but I always seemed to have to update to a new version whenever I actually wanted to use it.
So this was when I had to make the call for my own set-up, about 4 years ago. We were using a rather well-working GitLab set-up for work back then, but it all seemed overkill to me.
So I found: gogs.io
It runs with one command, the only dependency is two file system directories with (a) the settings of gogs and (b) your repositories.
It deploys as literally a SINGLE BINARY without any other things to consider. With the provided Dockerfile you are up and running in seconds.
It has never let me down. It’s running and providing its service. And that’s the end of it.
I am using it, as said, for 95 private repositories and a lot of additionally mirrored Git repositories. Gogs will support you by keeping those mirrors in sync in the background. It’s even multi-user and multi-organization capable.
Let me introduce you to a wonderful concept. We are using these movies as a backdrop when on the stepper or spinning – essentially when doing sports – or as a screensaver that plays whenever nothing else is playing on the screens around the house.
What is it you ask?
The thing I am talking about is: Walking Videos! Especially from people who walk through Tokyo / Japan. And there are lots of them!
Think of it as a relaxing walk around a neighborhood you might not know. Take in the sounds and sights and enjoy. That’s the idea of it.
If you want the immediate experience, try this:
Of course there are a couple of different such YouTube channels waiting for your subscription. The most prominent ones I know are:
To further attract your interest, there’s a map of recent such walks in Tokyo. So you can specifically pick a walk you want to see from a map!
When you are writing code the patterns seem to repeat every once in a while. Not only the patterns but also the occasion you are going to apply certain code styles and methods while developing.
To support a developer in this creative work, the tedious and repetitious task of typing out what is thought can be assisted by machine learning.
Chances are your favourite IDE already supports a somewhat AI-driven code autocomplete feature. And if it does not, read on, as there are ways to integrate products like TabNine into any editor you can think of…
Visual Studio IntelliCode is a set of AI-assisted capabilities that improve developer productivity with features like contextual IntelliSense, argument completion, code formatting, and style rule inference.
Of course there are some new contenders to the scene, like TabNine:
TL;DR: TabNine is an autocompleter that helps you write code faster. We’re adding a deep learning model which significantly improves suggestion quality. You can see videos below and you can sign up for it here.
TabNine
Deep TabNine requires a lot of computing power: running the model on a laptop would not deliver the low latency that TabNine’s users have come to expect. So we are offering a service that will allow you to use TabNine’s servers for GPU-accelerated autocompletion. It’s called TabNine Cloud, …
TabNine
For .NET there’s one too. It supports C#, F# and VB.NET. For trying something quickly or sharing it online this is a nice way to do it:
Preserving old software is all about storing it and keeping it running.
The most important part being the latter. The best way to keep things running is by emulating the old and obsolete hardware as accurately as possible.
In computing, an emulator is hardware or software that enables one computer system (called the host) to behave like another computer system (called the guest). An emulator typically enables the host system to run software or use peripheral devices designed for the guest system. Emulation refers to the ability of a computer program in an electronic device to emulate (or imitate) another program or device.
Wikipedia: Emulator
There are a lot of different types of emulators for all sorts of purposes.
There are things like Bochs, which effectively emulates the hardware of a PC at chip level and can run virtually anywhere:
Bochs is a highly portable open source IA-32 (x86) PC emulator written in C++, that runs on most popular platforms. It includes emulation of the Intel x86 CPU, common I/O devices, and a custom BIOS. Bochs can be compiled to emulate many different x86 CPUs, from early 386 to the most recent x86-64 Intel and AMD processors which may even not have reached the market yet.
bochs: the Open Source IA-32 Emulation Project
Emulators of game consoles are like that – they emulate the whole system hardware and are able to run original and unchanged code by replicating the exact hardware. Sometimes more and sometimes less exactly.
Hardware emulation is in itself an extremely interesting field of software engineering. There’s the hard way of emulating everything accurately (and slowly) by doing exactly what the actual old hardware would have done, but in software (or even in replicated hardware).
With regard to old game console hardware, there are nowadays specialized distributions bundling lots of hardware/system emulators for specific, readily available hardware like the Raspberry Pi. Some of them have recently gotten some nice updates.
There are many connectors out in the world. A lot of them are old but still get used. And every once in a while you might need an actually great drawing / schematic of such a connector.
There’s a place for all your needs and curiosity. It’s bitsavers.org.
Just take a look at this drawing and cut-away of a coax connector from the 1976 AMP product catalog:
There’s lots more – just take a look at bitsavers, especially the software archive of (very) old computer software and sources. Just wow.
The firecracker exploded: after 2 weeks of using the Chuwi Hi10 Air, the eMMC flash is apparently malfunctioning.
And in a totally strange way: every byte on the eMMC can seemingly be read. Even Windows 10 boots. But after a while it will hang and blue-screen – apparently because it tries to write to the eMMC, and when those writes fail and pile up in the caches, at some point the system calls it quits.
Anyhow: it means that no byte currently on this eMMC can be deleted / overwritten – it can only be read.
The great Chinese support was really helpful and offered to replace the device free of charge right away. That’s very nice! But I came to the conclusion that I cannot send the device in, because:
It contains a full set of synced private data that I cannot remove by any means, because the freaking soldered-on eMMC flash is broken.
The recipient of this broken tablet in China would be able to read all my data and I could not do anything about it.
Only an extremely small fraction of the data on there is unencrypted – just the bit from before I had switched on encryption during the initial set-up I was still doing on the device. And that little piece of data is already enough to keep me from sending out the device.
Now, what can we learn from this? Never, ever work with anything – even during set-up – without full encryption.
As I mainly produce text and Markdown notes throughout the day, I am always interested in ways to quickly create simple text-based flow charts.
You open it in your browser and start drawing with the simple tools provided.
When you are done you export it to plain text and do what you feel like with it.
Here’s a feature overview: