I am still working on it – but it is coming together nicely. During the next vacation our fish tanks are going to be well fed.
TIL that I can do something I assumed everybody could do: I can make myself hear a roaring thunder sound by flexing a muscle I did not even know I had until now.
It’s quite interesting. The muscle is named “Tensor tympani” and it’s here:
The tensor tympani acts to dampen the noise produced by chewing. When tensed, the muscle pulls the malleus medially, tensing the tympanic membrane and damping vibration in the ear ossicles and thereby reducing the perceived amplitude of sounds.
https://en.wikipedia.org/wiki/Tensor_tympani_muscle#Voluntary_control
So the eye has an Iris to control how much light makes it in. The ear has this muscle to dampen too loud sounds. And apparently not everyone is able to willingly control it. Bummer!
Contracting muscles produce vibration and sound. Slow twitch fibers produce 10 to 30 contractions per second (equivalent to 10 to 30 Hz sound frequency). Fast twitch fibers produce 30 to 70 contractions per second (equivalent to 30 to 70 Hz sound frequency). The vibration can be witnessed and felt by highly tensing one’s muscles, as when making a firm fist. The sound can be heard by pressing a highly tensed muscle against the ear, again a firm fist is a good example. The sound is usually described as a rumbling sound.
https://en.wikipedia.org/wiki/Tensor_tympani_muscle
Some individuals can voluntarily produce this rumbling sound by contracting the tensor tympani muscle of the middle ear. The rumbling sound can also be heard when the neck or jaw muscles are highly tensed as when yawning deeply. This phenomenon has been known since (at least) 1884.
Interesting theories have now started forming in my head. I am very sensitive to chewing noises of all sorts, whether produced by myself or by others. This could be an explanation why.
Now excuse me, I need to flex this muscle and make the thunder roar!
Augmented Reality – AR – has been getting buzz here and there for almost 20 years now. Even with hardware becoming more powerful and optics becoming cheaper and more efficient, it is still nowhere close to being widely used and available.
Many refer to a one-trick-pony feature in location-based games like “Pokemon Go” as “AR”. Genuinely useful AR applications do exist, but they are not feasible with current hardware generations.
Nevertheless a team in California has taken out the scissors and keyboards and made HoloKit:
HoloKit features super sharp optics quality and a 76-degree diagonal field of view. Pairing with a smartphone, HoloKit can perform an inside-out tracking function, which uses the changing perspective on the outside world to note changes in its own position. HoloKit merges the real and the virtual in a smart way. While you see through the real world, virtual objects are blended into it. Powered by the accurate gyro and camera on smartphones, HoloKit solidly places virtual objects onto your table or floor, as if they were physically there without physical markers. These virtual objects will stay in the same place even if you walk away, just like real physical objects.
https://holokit.io/
HoloKit is different from screen-based AR experience like Tango. You can directly see through the headset and view the real world as is, and in the meantime the virtual objects are projected on top of the real world, as opposed to viewing both the real and the virtual through a smartphone camera.
Browsers can do many things. It’s probably your main window into the vast internet. Lots of things need visualization. And if you want to know how it’s done, maybe do one yourself, then…
And to further learn what it’s all about, go to Amelia Wattenberger’s blog and take a stroll:
So, you want to create amazing data visualizations on the web and you keep hearing about D3.js. But what is D3.js, and how can you learn it? Let’s start with the question: What is D3?
An Introduction to D3.js
While it might seem like D3.js is an all-encompassing framework, it’s really just a collection of small modules. Here are all of the modules: each is visualized as a circle – larger circles are modules with larger file sizes.
The demon core was a spherical 6.2-kilogram (14 lb) subcritical mass of plutonium 89 millimetres (3.5 in) in diameter, that was involved in two criticality accidents, on August 21, 1945 and May 21, 1946.
Wikipedia: Demon core
Now you can have fun without the death-risk in the comfort of your home.
Meet the party-core:
If you’re interested in this topic I can recommend a book:
Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima
A “delightfully astute” and “entertaining” history of the mishaps and meltdowns that have marked the path of scientific progress (Kirkus Reviews, starred review).
Augmented Reality needs proper 3D geometry and the ability to sense the environment to interact with it. At some point I would expect tools to show up that allow us to do some of this ourselves.
Seems like we’re one step closer. Ubiquity6 is reaching out to get early access to interested users:
We’re giving early access to our 3D mapping tools for creators and artists! If you’re interested in trying it out sign up for early access here: https://ubiquity6.typeform.com/to/bmpbkB
Ubiquity6 on Twitter
Of course. I applied. And I’ve just started testing.
Have you ever wanted a full control over your communication tool? #SnapOnAir #BlaspBerry v2. A true Qwerty computer KB. @Raspberry_Pi zero W. @Quectel 3G cellular chip. #Lora RFM95 chip. All opensource.
pwav robot on Twitter
There’s a full twitter thread here. More pictures, more information.
And there’s a GitHub repository with some schematics, configurations and so on…
I am having a hard time learning Japanese, and reading/writing the kanji especially.
Since I frequently have to write Japanese city names (for example when doing searches), I usually remember the spoken version of a name but do not quite yet remember its kanji version. I also do not want to switch keyboard languages back and forth.
Fortunately, macOS and iOS offer a nice way around this. With the built-in “Text Replacement” feature of your Mac or iPhone/iPad you can easily mass-import a mapping between the romanized version of a word and the kanji version of that word.
While typing you will then be presented with recommended text replacements, effectively the kanji of what you’ve just tried to write.
Unfortunately I do not know a way how to mass-import these text-replacements on iOS.
But if you own a macOS computer and you have it synced over iCloud with your mobile phone or tablet you will likely be able to open the text replacement pane in your system settings and import this plist file into it. Simply drag the file (after unzipping the ZIP file) into the text replacement window.
Download the Tokyo-Text-Replacement.zip file. Extract it (double clicking). And drag the .plist file into the Text Replacement Window.
For deriving your own files you can find the raw data, a list of all designated Ken and Ward names in Tokyo, here:
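To build such a file from your own word list, a minimal sketch: the exported macOS Text Substitutions plist is an array of dictionaries with `phrase` (what gets inserted) and `shortcut` (what you type) keys. The ward-name entries below are just illustrative examples.

```python
import plistlib

# Sketch: build a macOS "Text Substitutions"-style plist from a
# romaji -> kanji mapping. The plist format is an array of dicts with
# "phrase" (the replacement) and "shortcut" (what you type).

def build_substitutions(mapping):
    """Turn a {shortcut: phrase} dict into the plist array structure."""
    return [{"phrase": phrase, "shortcut": shortcut}
            for shortcut, phrase in sorted(mapping.items())]

def write_substitutions_plist(mapping, path):
    with open(path, "wb") as f:
        plistlib.dump(build_substitutions(mapping), f)

if __name__ == "__main__":
    # A few example Tokyo ward names (romaji -> kanji)
    tokyo_wards = {
        "shibuya": "渋谷区",
        "shinjuku": "新宿区",
        "chiyoda": "千代田区",
    }
    write_substitutions_plist(tokyo_wards, "My-Text-Replacement.plist")
```

The resulting .plist file can then be dragged into the Text Replacement pane just like the download above.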
In Nodes you write programs by connecting “blocks” of code. Each node – as we refer to them – is a self contained piece of functionality like loading a file, rendering a 3D geometry or tracking the position of the mouse. The source code can be as big or as tiny as you like. We’ve seen some of ours ranging from 5 lines of code to the thousands. Conceptual/functional separation is usually more important.
Nodes.io
*(not to be confused with node.js)
It’s been a year since Zenvent posted this:
A Hackintosh (a portmanteau of “Hack” and “Macintosh”), is a computer that runs macOS on a device not authorized by Apple, or one that no longer receives official software updates.
https://en.wikipedia.org/wiki/Hackintosh
The 27th Day of the Season of Bureaucracy: The Day of the Sloth, Holy Day of Slothage. Kick back. Hang around. Grow Moss.
Sloth-day
This time we talk about:
- Scanner Pro on iOS – https://apps.apple.com/us/app/scanner-pro/id333710667
- Scanbot on iOS – https://scanbot.io/en/index.html
- Subscription models for software and services
- RING camera and surveillance system – https://de-de.ring.com/
- Canary Indoor Camera – https://canary.is/
- Surveillance Station – https://www.synology.com/en-global/surveillance
- Ring has more than 400 police “partnerships” – https://arstechnica.com/tech-policy/2019/08/ring-has-more-than-400-police-partnerships-company-finally-says/
- Jumbo Privacy – https://blog.jumboprivacy.com/ – App Store: https://apps.apple.com/us/app/jumbo-privacy/id1454039975?ls=1
- Tim Berners-Lee’s project “Solid”: https://solid.mit.edu/ – https://en.wikipedia.org/wiki/Solid_(web_decentralization_project) – https://solid.inrupt.com/how-it-works
- Ubuntu – https://ubuntu.com/
- Throw-Away Remote VNC Linux Desktop in a Docker container – https://www.schrankmonster.de/2019/08/27/a-throw-away-linux-desktop-in-a-container/
- Virtual Network Computing – https://en.wikipedia.org/wiki/Virtual_Network_Computing
- Stephen Wolfram – https://blog.stephenwolfram.com/
- Speed of Light in Medium – https://en.wikipedia.org/wiki/Speed_of_light
Kind of Bloop is a chiptune tribute to Miles Davis’ Kind of Blue, a track-by-track 8-bit reinterpretation of the bestselling jazz album of all time.
Launched as a Kickstarter project in April 2009, only two weeks after Kickstarter itself opened its doors, the album’s production was funded by 419 backers around the world.
Kind of Bloop was released on August 17, 2009, on the 50th anniversary of Kind of Blue.
Download at the link.
XamariNES is a cross-platform Nintendo Emulator using .Net Standard written in C#. This project started initially as a nights/weekend project of mine to better understand the MOS 6502 processor in the original Nintendo Entertainment System. The CPU itself didn’t take long, working on it a couple hours here and there. I decided once the CPU was completed, how hard could it be just to take it to the next step and do the PPU? Here we are a year later and I finally think I have the PPU in a semi-working state.
XamariNES
If you ever wanted to start looking at and understand emulation this might be a starting point for you. With the high-level C# being used to describe and implement actual existing hardware – like the NES CPU:
The author comes full circle, and everything you’d expect from a simple working emulator is there:
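The core idea behind any CPU emulator like this is a fetch-decode-execute loop. This toy sketch is not XamariNES’s code; it implements just three real 6502 opcodes (LDA immediate, ADC immediate, BRK), and it ignores processor flags and carry entirely:

```python
# Toy fetch-decode-execute loop, illustrating the principle behind a CPU
# core like the one in XamariNES. NOT the project's code: only three 6502
# opcodes are implemented, and flags/carry handling is omitted.

class Toy6502:
    def __init__(self, program):
        self.memory = list(program)
        self.pc = 0   # program counter
        self.a = 0    # accumulator

    def step(self):
        opcode = self.memory[self.pc]
        self.pc += 1
        if opcode == 0xA9:    # LDA #immediate: load accumulator
            self.a = self.memory[self.pc]
            self.pc += 1
        elif opcode == 0x69:  # ADC #immediate: add (carry flag ignored here)
            self.a = (self.a + self.memory[self.pc]) & 0xFF
            self.pc += 1
        elif opcode == 0x00:  # BRK: stop execution in this toy
            return False
        else:
            raise ValueError(f"unimplemented opcode {opcode:#04x}")
        return True

    def run(self):
        while self.step():
            pass
        return self.a

# LDA #$05; ADC #$03; BRK  -> accumulator ends at 0x08
cpu = Toy6502([0xA9, 0x05, 0x69, 0x03, 0x00])
```

A real core adds the full opcode table, addressing modes and status flags; the loop itself stays this simple.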
You might, or might not be aware of my passion for black clothing. I like the simplicity and absence of noise.
Anyway. You might not be aware of the wonderful world of black as-in paint.
Apparently the current record holder in blackness (measured in percent absorption of visible light) is a product called “Vantablack”.
Vantablack is a material developed by Surrey NanoSystems in the United Kingdom and is one of the darkest substances known, absorbing up to 99.96% of visible light (at 663 nm if the light is perpendicular to the material).
Wikipedia: Vantablack
The name is a compound of the acronym VANTA (vertically aligned carbon nanotube arrays) and the color black.
Unfortunately this blackest-of-black coating is not readily available for purchase. Export rules apply and so it’s usually not sold to civilians at all.
“What is the next best thing?”, you ask. Well it’s BLACK 2.0.
I am cycling for fun and for the effect it has on my body and well-being. I do about 30km of cycling every day on average.
After my first stationary trainer broke I bought a new one with the capability to measure wattage and also to apply resistance measured in watts.
After looking at my average speeds, heart rates and times on the device I was able to build a quite detailed understanding of the broader picture: what affects my power output and what does not, and what effect nutrition and health have on what the body will deliver when asked for the exact same power output curve as last time.
In a nutshell, the numbers tell me that I usually ride at a mediocre 150 W constant load, doing about 40 km/h on average. My reserves usually allow me to keep this up for 1-2 hours without a break.
So far so good. Now I’ve learned from more serious cyclists that there is something called “Functional Threshold Power”. I get tested regularly at the doctor to rule out any heart-rate issues.
Reading about this Functional Threshold Power, my curiosity is sparked.
How much could I do? Should I even go for measuring it?
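The usual rule of thumb (not from this post) estimates FTP as roughly 95% of the best average power you can hold for 20 minutes; a back-of-envelope sketch, with training zones as the classic fractions of FTP:

```python
# Rule-of-thumb FTP estimate: ~95% of your best 20-minute average power.
# The 95% factor and the zone fractions are the commonly cited conventions,
# used here purely as an illustration.

def estimate_ftp(avg_power_20min_watts):
    """FTP estimate from a 20-minute all-out test."""
    return 0.95 * avg_power_20min_watts

def power_zones(ftp):
    """Classic training zones as (low, high) watt ranges, fractions of FTP."""
    return {
        "active recovery": (0.0, 0.55 * ftp),
        "endurance": (0.55 * ftp, 0.75 * ftp),
        "tempo": (0.75 * ftp, 0.90 * ftp),
        "threshold": (0.90 * ftp, 1.05 * ftp),
    }

if __name__ == "__main__":
    # e.g. a 200 W average over the 20-minute test -> FTP of about 190 W
    print(round(estimate_ftp(200), 1))
```

A steady 150 W cruise says little about FTP; only an all-out 20-minute effort would.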
I am running most of my in-house infrastructure based on Docker these days…
Docker is a set of platform-as-a-service (PaaS) products that use operating-system-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels.
All containers are run by a single operating-system kernel and are thus more lightweight than virtual machines.
Wikipedia: Docker
And given the above definition it’s fairly easy to create and run containers of things like command-line tools and background servers/services. But due to the nature of Docker being “terminal only” by default it’s quite hard to do anything UI related.
But there is a way. By using the VNC protocol to access the graphical user interface, we can set up a container running a fully-fledged Linux desktop and connect directly to this container.
I am using what I call “throw-away Linux desktop containers” all day, every day, for various needs and uses. Every time I start such a container it is brand-new and ready to be used.
Actually when I start it the process looks like this:
As you can see, when the container starts up it asks for a password to be set. This is the password that needs to be entered when the VNC client connects to the container.
And when you are connected, this is what you get:
I am sharing my scripts and Dockerfile with you so you can use them yourself. If you put a bit more time into it you can even customize it to your specific needs. At this point it’s based on Ubuntu 18.04 and starts up an ubuntu-mate desktop environment in its default configuration.
When you log into the container it will log you in as root – but effectively you won’t be able to really screw around with the host machine as the container is still isolating you from the host. Nevertheless be aware that the container has some quirks and is run in extended privileges mode.
Chromium will be pre-installed as a browser, but you will find that it won’t start up. That’s because Chromium refuses to start when run as the root user.
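The launch itself boils down to one `docker run` invocation; a sketch of a tiny launcher, where the image name `desktop-container` and port 5901 are assumptions for illustration (adapt them to the actual Dockerfile from the repository):

```python
# Sketch of a launcher for a throw-away VNC desktop container.
# The image name "desktop-container" and the VNC port 5901 are illustrative
# assumptions; --rm is what makes the container truly throw-away.

def docker_run_command(image="desktop-container", vnc_port=5901):
    """Build the argv for launching a throw-away desktop container."""
    return [
        "docker", "run",
        "--rm",                     # remove the container on exit
        "-it",                      # interactive, so the password prompt works
        "-p", f"{vnc_port}:5901",   # publish the VNC server port
        image,
    ]

if __name__ == "__main__":
    # hand this list to subprocess.run(...) to actually launch the container
    print(" ".join(docker_run_command()))
```

Afterwards any VNC client pointed at localhost:5901 reaches the desktop.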
A lot means, a lot:
To have a chance to get on top of things and save space, try this:
npm i -g npkill
By then using npkill you will get an overview (after a looong scan) of how much disk space there is to be saved.
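What that scan does can be approximated in a few lines: walk the tree, find `node_modules` directories, and sum up their sizes. A sketch, not npkill’s actual implementation:

```python
# Approximation of what npkill's scan does: locate node_modules directories
# and report how much disk space each one occupies. Not npkill's code.

import os

def dir_size(path):
    """Total size in bytes of all files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # skip broken symlinks and vanished files
    return total

def find_node_modules(start="."):
    """Return [(path, size_bytes)] for every node_modules dir, largest first."""
    hits = []
    for root, dirs, _files in os.walk(start):
        if "node_modules" in dirs:
            target = os.path.join(root, "node_modules")
            hits.append((target, dir_size(target)))
            dirs.remove("node_modules")  # don't descend into it again
    return sorted(hits, key=lambda t: t[1], reverse=True)

if __name__ == "__main__":
    for path, size in find_node_modules():
        print(f"{size / 1e6:8.1f} MB  {path}")
```

npkill additionally lets you delete the directories interactively; the slow part is exactly this recursive size walk.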
I have been using 1Password for years now. It’s a great tool. So far.
As I am using it locally, synced across my own infrastructure, I feel like I am slowly but surely being pushed out of their target customer group. What does that mean?
The current pricing scheme, if you buy new, for 1Password looks like this:
So it’s always going to be a subscription if you want to start with it the straightforward way.
It used to be a one-time purchase per platform and you could set up syncing across other cloud services as you saw fit. If you really start from scratch the 1Password apps still give you the option to create and sync locally, but the direction is set and clear: they want you to sign up for a subscription.
I am not going to purchase a subscription. With some searching I found software that is extremely similar to 1Password and fully featured. And it is available as a one-time purchase per platform for all platforms I am using.
Also. This one is the first that could import my 1Password export files straight away without any issues. Even One-Time-Passwords (OTP) worked immediately.
The name is Enpass and it’s available for Mac, Windows, Linux, iOS, Android and basically acts as a drop-in replacement for 1Password. It directly imports what 1Password exports. And its pricing is:
Subscriptions for services like this are a no-go for me. It’s a commodity service, and I am willing to pay for updates and maintenance every year or so through a major paid upgrade.
I am not willing to pay a substantial amount of money per user per month just to keep having access to my passwords. And having them synced onto some company’s infrastructure does not make this deal sweeter.
Enpass on the other hand comes with peace-of-mind that no data leaves your infrastructure and that you can get the data in and out any time.
It can import from these:
As mentioned, I migrated away from 1Password in mere minutes and was able to replace it immediately.
So I leave this right here:
It really does implement a lot of what an operating system UI and portions of the backends are supposed to be. It looks quite funky and there are applications for it. Of course it’s open source.
I want all electron Apps to start existing there so I can call all of them with just a browser from anywhere.
If you want to give it a spin, click here:
Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. In order to do so, we show that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash-no flash input pairs.
Deep Image Prior Paper
Apart from its diverse applications, our approach highlights the inductive bias captured by standard generator network architectures. It also bridges the gap between two very popular families of image restoration methods: learning-based methods using deep convolutional networks and learning-free methods based on handcrafted image priors such as self-similarity.
AI and deep learning are not always necessary or helpful. In this case impressive results have been achieved without any of the hyped technologies.
Here you give the algorithm two inputs: a base video that you want to stylize and a picture that represents the style you want to achieve.
We introduce a new example-based approach to video stylization, with a focus on preserving the visual quality of the style, user controllability and applicability to arbitrary video. Our method gets as input one or more keyframes that the artist chooses to stylize with standard painting tools. It then automatically propagates the stylization to the rest of the sequence. To facilitate this while preserving visual quality, we developed a new type of guidance for state-of-art patch-based synthesis, that can be applied to any type of video content and does not require any additional information besides the video itself and a user-specified mask of the region to be stylized. We further show a temporal blending approach for interpolating style between keyframes that preserves texture coherence, contrast and high frequency details. We evaluate our method on various scenes from real production setting and provide a thorough comparison with prior art.
Stylizing Video by Example Paper
Apparently there is also a Windows demo available in which you are supposedly able to create your own stylized short clips. But when I wanted to try it out, it threw a lot of funky messages about the application being untrustworthy / possibly malicious. So be aware and cautious.
Does not happen often. And did not last long. Actually I like those “service down” messages from several websites. Does anyone remember the Fail Whale?
Maybe you want to give EasyEDA a try, as its in-browser experience is better than anything I have come across so far. Granted, I am not doing PCBs regularly, but whenever I tried with the programs that were recommended to me, it wasn’t as straightforward as it is with this tool.