We’ve got several quite big fish tanks in our house, mainly inhabited by freshwater turtles.
These turtles need to be fed regularly. While that is normally not an issue, it becomes one when you leave the house for an extended period of time.
Of course there are humans checking on everything in the house regularly, but as much as can be automated should and will be automated in our household. So the requirement was to have the turtle feeding automated.
To achieve this it would be necessary to dispense a fixed amount of turtle food into the tanks on a schedule, with some checks running in the background (like water quality and such).
It was quite a hassle to come up with a plan for how the hardware should look and work. Ultimately I’ve settled on retrofitting an off-the-shelf fish pond feeder to become controllable through MQTT.
The pond feeder I’ve found and used is this one:
It’s not really worth linking to a specific product detail page, as this sort of feeder is available under hundreds of different names. It always looks the same and is priced at roughly the same point.
If you want to build this yourself, you want one that looks like the above. I’ve bought three of them and they all seem to come out of the same factory somewhere in China.
Anyway. Once you’ve got one you can easily open it up and start modifying it.
The functional principle of the feeder is rather simple:
turn the feeder wheel
take the micro-switch status into account – when it’s pressed down, the wheel must be pushing against it
turn it until the micro-switch is not pressed anymore
turn some more until it’s pressed again
Simple. Since the switch status is not known after a power loss / reboot, a calibration run is necessary (even with the factory electronics) every time it boots up. A minimal sketch of this wheel logic is shown below.
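To illustrate the principle in code, here is a minimal sketch under my own assumptions – the pin assignments, the pull-up wiring of the switch and the helper name turnOnce are mine, not the original firmware’s:

```cpp
// Minimal illustration of the feeder wheel logic on a Wemos D1 Mini.
// Assumes the micro-switch pulls the pin LOW while pressed (INPUT_PULLUP).
const int MOTOR_PIN  = D1;  // drives the motor via the NPN transistor
const int SWITCH_PIN = D2;  // micro-switch status input

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
  pinMode(SWITCH_PIN, INPUT_PULLUP);
}

// Turn the wheel exactly one feeding position.
void turnOnce() {
  digitalWrite(MOTOR_PIN, HIGH);           // start turning
  while (digitalRead(SWITCH_PIN) == LOW)   // turn until the switch releases
    yield();
  while (digitalRead(SWITCH_PIN) == HIGH)  // turn some more until pressed again
    yield();
  digitalWrite(MOTOR_PIN, LOW);            // stop in a defined position
}

void loop() {
  // In the real firmware the turns are triggered via MQTT – see below.
}
```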
After opening the feeder I cut the two cables going to the motor as well as the micro-switch cables. I added a 4-pin JST-XH connector to both ends, so I can reconnect it to its original state if desired.
These are all the parts needed:
I am using a Wemos D1 Mini and a couple of additional components apart from the prototype board: a PN2222 NPN transistor, a 1N4007 rectifier diode (as flyback protection across the motor) and a 220 Ohm resistor.
I’ve connected everything according to this schematic I’ve drawn with Fritzing:
I’ve then prototyped away and put everything on the PCB – of course with very limited soldering skill:
As you can see, the JST-XH connectors for motor and switch can now be connected easily to the PCB holding all the parts.
Make sure you check the polarity and that you have correctly hooked up the motor and switch.
When done correctly, the PCB (I’ve used a 40 mm x 60 mm prototype PCB) and all cables will fit into the case. There’s plenty of room and I’ve put it to the side of the case. I’ve also directly connected a USB cable to the USB port of the Wemos D1 Mini. As long as you feed it with at least 1 A, everything will work.
Since the Wemos D1 Mini sports an ESP8266 and is well supported by Arduino, it was clear to me to use the Arduino IDE for the software portion of this project.
To get everything running you need to modify the .ino file in the src folder (a hedged example follows after the list):
What you need to configure:
the output pins you have chosen – D1+D2 are pre-configured
WiFi SSID + PASS
MQTT server (IP, optionally username + password)
MQTT Topic prefix
the commands that can be sent through MQTT to the /feed topic
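What that configuration block could look like – the names below are my illustration, not necessarily the identifiers used in the actual source file:

```cpp
// Illustrative configuration values – check the real .ino for the exact names.
#define WIFI_SSID    "your-ssid"
#define WIFI_PASS    "your-password"
#define MQTT_SERVER  "192.168.1.10"    // IP of your MQTT broker
#define MQTT_USER    ""                // leave empty if no authentication
#define MQTT_PASS    ""
#define MQTT_PREFIX  "house/aquarium"  // prefix for the topics described below
#define MOTOR_PIN    D1                // pre-configured output pins
#define SWITCH_PIN   D2
```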
MQTT topics and control
There are two MQTT topics overall:
$prefix/feeder-$chipid/state This topic holds the current state of the feeder. It shows a number starting from 0 upwards. When the feeder is ready it will be 0. While it is feeding it will be 1 or higher, counting down with every successful turn. There is a safety cut-off for the motor: if the motor is active for longer than configured in the MaximumMotorRuntime variable, it will shut off by itself and set the state to -1.
$prefix/feeder-$chipid/feed This topic acts as the command topic to start and control the feeding process. To start the process you send the number of turns you want to happen – 1 to 5 seems reasonable. The feeder will show its progress in the /state topic. You can update the amount at any time to shorten or lengthen the process. On the very first feed request after a power-up / reboot the feeder will do a calibration run, to make sure that all the wheels are in the right position to work flawlessly. A sketch of this control flow is shown below.
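Put together, the control flow could look roughly like this. This is a hedged sketch using the PubSubClient library, not the original firmware: topic strings, pin assignments and the MaximumMotorRuntime value are placeholders, the calibration run is omitted for brevity, and only the state semantics (0 = ready, counting down while feeding, -1 = cut-off) follow the description above.

```cpp
#include <ESP8266WiFi.h>
#include <PubSubClient.h>

const int MOTOR_PIN  = D1;
const int SWITCH_PIN = D2;
const unsigned long MaximumMotorRuntime = 15000;  // ms – placeholder value

WiFiClient wifi;
PubSubClient mqtt(wifi);
int turnsLeft = 0;  // mirrors /state: 0 ready, >0 feeding, -1 cut-off

void publishState() {
  char buf[8];
  itoa(turnsLeft, buf, 10);
  mqtt.publish("house/aquarium/feeder-12345/state", buf, true);  // retained
}

// Called by PubSubClient for every message on the /feed command topic.
void onFeedMessage(char* topic, byte* payload, unsigned int length) {
  char buf[8] = {0};
  unsigned int n = length < sizeof(buf) - 1 ? length : sizeof(buf) - 1;
  memcpy(buf, payload, n);
  turnsLeft = atoi(buf);  // requested number of turns
  publishState();
}

// Turn one feeding position; false means the safety cut-off triggered.
bool turnOnceWithCutoff() {
  unsigned long start = millis();
  digitalWrite(MOTOR_PIN, HIGH);
  while (digitalRead(SWITCH_PIN) == LOW) {   // wait for the switch to release
    if (millis() - start > MaximumMotorRuntime) break;
    yield();
  }
  while (digitalRead(SWITCH_PIN) == HIGH) {  // turn until it is pressed again
    if (millis() - start > MaximumMotorRuntime) break;
    yield();
  }
  digitalWrite(MOTOR_PIN, LOW);
  return (millis() - start) <= MaximumMotorRuntime;
}

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
  pinMode(SWITCH_PIN, INPUT_PULLUP);
  WiFi.begin("your-ssid", "your-password");
  while (WiFi.status() != WL_CONNECTED) delay(250);
  mqtt.setServer("192.168.1.10", 1883);
  mqtt.setCallback(onFeedMessage);
  while (!mqtt.connect("feeder")) delay(1000);
  mqtt.subscribe("house/aquarium/feeder-12345/feed");
  publishState();
}

void loop() {
  mqtt.loop();
  if (turnsLeft > 0) {
    if (turnOnceWithCutoff()) turnsLeft--;  // one successful turn done
    else turnsLeft = -1;                    // motor ran too long: cut off
    publishState();
  }
}
```

Triggering a feed is then as simple as publishing, say, a 3 to the /feed topic from any MQTT client.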
TIL that I can do something which I had assumed everybody could do: I can make myself hear a roaring thunder sound by flexing a muscle I did not know about until now.
It’s quite interesting. The muscle is named “Tensor tympani” and it’s here:
The tensor tympani acts to dampen the noise produced by chewing. When tensed, the muscle pulls the malleus medially, tensing the tympanic membrane and damping vibration in the ear ossicles and thereby reducing the perceived amplitude of sounds.
So the eye has an iris to control how much light makes it in. The ear has this muscle to dampen sounds that are too loud. And apparently not everyone is able to control it at will. Bummer!
Contracting muscles produce vibration and sound. Slow twitch fibers produce 10 to 30 contractions per second (equivalent to 10 to 30 Hz sound frequency). Fast twitch fibers produce 30 to 70 contractions per second (equivalent to 30 to 70 Hz sound frequency). The vibration can be witnessed and felt by highly tensing one’s muscles, as when making a firm fist. The sound can be heard by pressing a highly tensed muscle against the ear, again a firm fist is a good example. The sound is usually described as a rumbling sound.
Some individuals can voluntarily produce this rumbling sound by contracting the tensor tympani muscle of the middle ear. The rumbling sound can also be heard when the neck or jaw muscles are highly tensed as when yawning deeply. This phenomenon has been known since (at least) 1884.
Augmented Reality – AR – has been getting some buzz here and there for almost 20 years now. Even with hardware becoming more powerful and optics + light hardware becoming cheaper and more efficient, it’s still nowhere close to being widely used and available.
Many refer to some one-trick-pony feature in location-based games like “Pokemon Go” as being “AR”. Actually useful cases of AR do exist, but they are not feasible with current hardware generations.
Nevertheless, a team in California has taken out the scissors and keyboards and made HoloKit:
HoloKit features super sharp optics quality and a 76-degree diagonal field of view. Pairing with a smartphone, HoloKit can perform an inside-out tracking function, which uses the changing perspective on the outside world to note changes in its own position. HoloKit merges the real and the virtual in a smart way. While you see through to the real world, virtual objects are blended into it. Powered by the accurate gyro and camera of smartphones, HoloKit solidly places virtual objects onto your table or floor, as if they were physically there, without physical markers. These virtual objects will stay in the same place even if you walk away, just like real physical objects.
HoloKit is different from screen-based AR experience like Tango. You can directly see through the headset and view the real world as is, and in the meantime the virtual objects are projected on top of the real world, as opposed to viewing both the real and the virtual through a smartphone camera.
Browsers can do many things, and your browser is probably your main window into the vast internet. Lots of things need visualization. And if you want to know how that’s done, and maybe build one yourself, then…
So, you want to create amazing data visualizations on the web and you keep hearing about D3.js. But what is D3.js, and how can you learn it? Let’s start with the question: What is D3?
While it might seem like D3.js is an all-encompassing framework, it’s really just a collection of small modules. Here are all of the modules: each is visualized as a circle – larger circles are modules with larger file sizes.
I am having a hard time learning Japanese, especially reading and writing the kanji.
I have to write Japanese city names frequently (for example when doing searches), and while I remember the spoken version of a name, I do not quite remember the kanji version yet. I also do not want to switch back and forth between keyboard languages.
Especially on macOS and iOS there is a nice way around this: with the built-in “Text Replacement” feature of your Mac or iPhone/iPad you can easily mass-import a mapping between the romanized version of a word and the Japanese kanji version of that word.
While you are typing you will then be presented with recommended text replacements – effectively the kanji of what you’ve just tried to write.
Unfortunately I do not know a way to mass-import these text replacements directly on iOS.
But if you own a macOS computer that is synced over iCloud with your phone or tablet, you can open the text replacement pane in your system settings and import this plist file into it. Simply drag the file (after unzipping the ZIP file) into the text replacement window.
Download the Tokyo-Text-Replacement.zip file. Extract it (double clicking). And drag the .plist file into the Text Replacement Window.
To derive your own files, you can find the raw data – a list of all designated ken and ward names in Tokyo – here:
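If you want to generate such a plist yourself: as far as I know, the import format is simply an array of phrase/shortcut pairs. The two entries below are my own examples of that structure, so double-check against a file exported from your own machine:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<array>
  <dict>
    <key>phrase</key><string>東京</string>
    <key>shortcut</key><string>toukyou</string>
  </dict>
  <dict>
    <key>phrase</key><string>新宿</string>
    <key>shortcut</key><string>shinjuku</string>
  </dict>
</array>
</plist>
```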
In Nodes you write programs by connecting “blocks” of code. Each node – as we refer to them – is a self-contained piece of functionality, like loading a file, rendering a 3D geometry or tracking the position of the mouse. The source code can be as big or as tiny as you like. We’ve seen some of ours ranging from 5 lines of code to thousands. Conceptual/functional separation is usually more important.
Kind of Bloop is a chiptune tribute to Miles Davis’ Kind of Blue, a track-by-track 8-bit reinterpretation of the bestselling jazz album of all time. Launched as a Kickstarter project in April 2009, only two weeks after Kickstarter itself opened its doors, the album’s production was funded by 419 backers around the world. Kind of Bloop was released on August 17, 2009, on the 50th anniversary of Kind of Blue.
XamariNES is a cross-platform Nintendo emulator using .NET Standard, written in C#. This project started initially as a nights/weekends project of mine to better understand the MOS 6502 processor in the original Nintendo Entertainment System. The CPU itself didn’t take long, working on it a couple of hours here and there. Once the CPU was completed I decided: how hard could it be to take it to the next step and do the PPU? Here we are a year later, and I finally think I have the PPU in a semi-working state.
If you ever wanted to start looking into and understanding emulation, this might be a starting point for you, with high-level C# being used to describe and implement actual existing hardware – like the NES CPU.
The author comes full circle: everything you’d expect from a simple working emulator is there.
You might, or might not, be aware of my passion for black clothing. I like the simplicity and the absence of noise.
Anyway. You might not be aware of the wonderful world of black as in paint.
Apparently the current record holder in blackness (measured in percent absorption of visible light) is a product called “Vantablack”.
Vantablack is a material developed by Surrey NanoSystems in the United Kingdom and is one of the darkest substances known, absorbing up to 99.96% of visible light (at 663 nm if the light is perpendicular to the material). The name is a compound of the acronym VANTA (vertically aligned carbon nanotube arrays) and the color black.
After my first stationary trainer broke, I bought a new one with the capability to measure wattage and also to apply resistance controlled by the watt.
After looking at my average speeds, heart rates and times on the device, I was able to build a quite detailed understanding of the broader picture: what affects my power output and what does not, and the effects of nutrition and health on what the body will deliver when asked for the exact same power output curve as last time.
In a nutshell, the numbers tell me that I usually ride at a mediocre 150 W of constant load, doing about 40 km/h on average. My reserves usually allow me to go for 1-2 hours without a break at this level.
So far so good. Now I’ve found out from more serious cyclists that there’s something called “Functional Threshold Power”. I have regular tests at the doctor’s to check for any heart-rate issues.
Reading about this Functional Threshold Power, my curiosity is sparked.
How much could I do? Should I even go for measuring it?
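For reference: from what I’ve read, the common field estimate – a rule of thumb, not a medical protocol – is to ride an all-out 20-minute test and take 95% of the average power, i.e. FTP ≈ 0.95 × P(20 min). So if I could hold, say, 200 W for those 20 minutes, that would put my FTP at roughly 190 W.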
I am running most of my in-house infrastructure based on Docker these days…
Docker is a set of platform-as-a-service (PaaS) products that use operating-system-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels.
All containers are run by a single operating-system kernel and are thus more lightweight than virtual machines.
And given the above definition, it’s fairly easy to create and run containers for things like command-line tools and background servers/services. But due to the nature of Docker being “terminal only” by default, it’s quite hard to do anything UI-related.
But there is a way. By using the VNC protocol to get access to the graphical user interface, we can set up a container running a fully-fledged Linux desktop and connect directly to this container.
I am using something I call “throw-away Linux desktop containers” all day, every day, for various needs and uses. Every time I start such a container, it is brand-new and ready to be used.
Actually when I start it the process looks like this:
As you can see, when the container starts up it asks for a password to be set. This is the password that needs to be entered when the VNC client connects to the container.
And when you are connected, this is what you get:
I am sharing my scripts and Dockerfile with you so you can use them yourself. If you put a bit more time into it, you can even customize it to your specific needs. At this point it’s based on Ubuntu 18.04 and starts up an ubuntu-mate desktop environment in its default configuration. A rough sketch of the core idea follows below.
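To give you the gist without the full repository: a heavily stripped-down Dockerfile for this concept could look roughly like this. This is my illustration, not the actual shared file – the package selection (tigervnc-standalone-server) and the VNC flags are assumptions:

```dockerfile
FROM ubuntu:18.04
ENV DEBIAN_FRONTEND=noninteractive
# Desktop environment plus a VNC server to expose it over the network.
RUN apt-get update && \
    apt-get install -y ubuntu-mate-desktop tigervnc-standalone-server && \
    apt-get clean
EXPOSE 5901
# vncpasswd prompts for the session password at start-up; the desktop is
# then served on display :1, which maps to TCP port 5901.
CMD vncpasswd && vncserver :1 -localhost no -fg
```

Run it with an interactive terminal (so the password prompt works) and the port published, then point your VNC client at port 5901.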
When you log into the container it will log you in as root – but effectively you won’t be able to really screw around with the host machine, as the container is still isolating you from the host. Nevertheless, be aware that the container has some quirks and is run in extended privileges mode.
Chromium will be pre-installed as a browser, but you will find that it won’t start up. That’s because Chromium refuses to start when launched as the root user.
I have been using 1Password for years now. It’s a great tool. So far.
As I am using it locally, synced across my own infrastructure, I feel like I am slowly but surely being pushed out of their target customer group. What does that mean?
The current pricing scheme, if you buy new, for 1Password looks like this:
So it’s always going to be a subscription if you want to start with it and stay on the officially supported path.
It used to be a one-time purchase per platform, and you could set up syncing across other cloud services as you saw fit. If you really start from scratch, the 1Password apps still give you the option to create and sync vaults locally, but the direction is set and clear: they want you to sign up for a subscription.
I am not going to purchase a subscription. With some searching I found a piece of software which is extremely similar to 1Password and fully featured. And it is available as a one-time purchase per platform, for all platforms I am using.
Also: this one is the first that could import my 1Password export files straight away without any issues. Even one-time passwords (OTP) worked immediately.
The name is Enpass. It’s available for Mac, Windows, Linux, iOS and Android, and it basically acts as a drop-in replacement for 1Password. It directly imports what 1Password exports. And its pricing is:
Subscriptions for services like this are a no-go for me. It’s a commodity service, for which I am willing to pay for trailing updates and maintenance every year or so in a major update.
I am not willing to pay a substantial amount of money per user per month just to keep having access to my passwords. And having them synced onto some company’s infrastructure does not make this deal sweeter.
Enpass, on the other hand, comes with the peace of mind that no data leaves your infrastructure and that you can get the data in and out at any time.
It can import from these:
As mentioned, I’ve migrated away from 1Password in a matter of minutes and was able to plug-in-replace it immediately.
Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. In order to do so, we show that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash-no flash input pairs. Apart from its diverse applications, our approach highlights the inductive bias captured by standard generator network architectures. It also bridges the gap between two very popular families of image restoration methods: learning-based methods using deep convolutional networks and learning-free methods based on handcrafted image priors such as self-similarity.
AI and deep learning are not always necessary or helpful. In this case impressive results have been achieved without the use of any of the hyped technologies.
Here you give the algorithm two inputs: the base video that you want to stylize and a reference picture that resembles the style you want to achieve.
We introduce a new example-based approach to video stylization, with a focus on preserving the visual quality of the style, user controllability and applicability to arbitrary video. Our method gets as input one or more keyframes that the artist chooses to stylize with standard painting tools. It then automatically propagates the stylization to the rest of the sequence. To facilitate this while preserving visual quality, we developed a new type of guidance for state-of-the-art patch-based synthesis, that can be applied to any type of video content and does not require any additional information besides the video itself and a user-specified mask of the region to be stylized. We further show a temporal blending approach for interpolating style between keyframes that preserves texture coherence, contrast and high frequency details. We evaluate our method on various scenes from real production settings and provide a thorough comparison with prior art.
Apparently there is also a Windows demo available, with which you are supposedly able to create your own stylized short clips. But when I wanted to try it out, it threw a lot of funky messages about the application being untrustworthy / possibly malicious. So be aware and cautious.