artifact / noise removal and inpainting toolbox

Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. In order to do so, we show that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash-no flash input pairs.
Apart from its diverse applications, our approach highlights the inductive bias captured by standard generator network architectures. It also bridges the gap between two very popular families of image restoration methods: learning-based methods using deep convolutional networks and learning-free methods based on handcrafted image priors such as self-similarity.

Deep Image Prior Paper

As usual, there's source code available to try out yourself. It even comes with a pre-configured Docker Jupyter notebook.
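To get a feeling for how little machinery the core idea needs, here is a minimal sketch in PyTorch. Everything in it (the toy three-layer network, the tensor shapes, the step count) is my own illustration; the paper's actual encoder-decoder architecture with skip connections is considerably more elaborate:

# Deep-image-prior denoising in a nutshell: fit a randomly-initialized
# ConvNet to the corrupted image and stop early, before it fits the noise.
import torch
import torch.nn as nn

noisy = torch.rand(1, 3, 256, 256)   # stand-in for the corrupted image
z = torch.randn(1, 32, 256, 256)     # fixed random input code

net = nn.Sequential(                 # randomly initialized, never pretrained
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(1800):
    opt.zero_grad()
    loss = ((net(z) - noisy) ** 2).mean()
    loss.backward()
    opt.step()

denoised = net(z).detach()           # take the reconstruction, not the target

The structure of the network itself acts as the prior: run long enough, the optimization would eventually reproduce the noise too, so early stopping is the only regularizer.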

stylizing video by example

AI and deep learning are not always necessary or helpful. In this case impressive results have been achieved without the use of any of the hyped technologies.

In this case you give the algorithm two inputs: a base video that you want to stylize and a base picture that resembles the style you want to achieve.

We introduce a new example-based approach to video stylization, with a focus on preserving the visual quality of the style, user controllability and applicability to arbitrary video. Our method gets as input one or more keyframes that the artist chooses to stylize with standard painting tools. It then automatically propagates the stylization to the rest of the sequence. To facilitate this while preserving visual quality, we developed a new type of guidance for state-of-the-art patch-based synthesis, that can be applied to any type of video content and does not require any additional information besides the video itself and a user-specified mask of the region to be stylized. We further show a temporal blending approach for interpolating style between keyframes that preserves texture coherence, contrast and high frequency details. We evaluate our method on various scenes from real production settings and provide a thorough comparison with prior art.

Stylizing Video by Example Paper

Apparently there is also a Windows demo available with which you are supposedly able to create your own stylized short clips. But when I tried it out, it threw a lot of funky warnings claiming the application was untrustworthy / possibly malicious. So be aware and cautious.

code autocomplete with deep learning

When you are writing code, certain patterns seem to repeat every once in a while. Not only the patterns themselves, but also the occasions on which you apply certain code styles and methods while developing.

To support developers in this creative work, machine learning can take over the tedious and repetitious task of typing out what has already been thought through.

Chances are your favourite IDE already supports an AI-driven code autocomplete feature of some sort. And if it does not, read on, as there are ways to integrate products like TabNine into almost any editor you can think of…

Visual Studio IntelliCode is a set of AI-assisted capabilities that improve developer productivity with features like contextual IntelliSense, argument completion, code formatting, and style rule inference.

IntelliCode augments existing developer workflows with machine-learning services that provide an understanding of code and its context. It’s applicable for C#, C++ (in preview), JavaScript/TypeScript (in preview), and XAML code today, and will be updated in the future to support more languages.

Visual Studio IntelliCode

Of course there are some new contenders on the scene, like TabNine:

TL;DR: TabNine is an autocompleter that helps you write code faster. We’re adding a deep learning model which significantly improves suggestion quality. You can see videos below and you can sign up for it here.

TabNine

Deep TabNine requires a lot of computing power: running the model on a laptop would not deliver the low latency that TabNine’s users have come to expect. So we are offering a service that will allow you to use TabNine’s servers for GPU-accelerated autocompletion. It’s called TabNine Cloud, …

TabNine

a wild Ma.K (Maschinen Krieger) appears

The franchise was started by Kow Yokoyama in the 1980s. Yokoyama-san was a scratch-build modeller, artist and sculptor. Among his works he built machines of war that would fight in the 29th century, but took their visual cues from early 20th century weaponry and the early NASA space program. All his models were pieced together from numerous kits including armor, aircraft, cars and found objects (like ping pong balls).

Ma.K

Often described as WWII machinery in space, these kits are the perfect blend of sci-fi modelling with AFV weathering and finishing techniques…

There is also a longer article with high-resolution pictures here.

brain simulators

With recent announcements around human-brain and brain-machine-interface research like Neuralink, the topic is seemingly seeing some more investment now.

As this whole topic is close to my heart, I am interested in all things brain simulation. Thus, here's my personal "logbook entry" on the re-appearance of this topic:

This leads to one of the arguments for whole-brain simulation: it’ll help us solve the “biological imitation game,” a Turing test-like assay that pits digitally reconstructed brains against real ones. Iterations of the test help select increasingly more accurate models for a given task, which eventually become the most promising ideas for how specific biological networks operate. And because these models are based on mathematical equations, they could become the heart of next-generation AI.

Singularity Hub

There's also a paper! Unfortunately I cannot link directly to it as it is behind a paywall. Neuralink, on the other hand, was so kind as to publish open-access.

visualize baby sleep pattern

A visualization of my son’s sleep pattern from birth to his first birthday. Crochet border surrounding a double knit body. Each row represents a single day. Each stitch represents 6 minutes of time spent awake or asleep

Seung Lee on Twitter

No babies here. But I want such a blanket now.

the fastest journey from Roma to Londinium…

…in July takes 27.1 days, covering 2967 kilometers. At least if you had taken on the challenge in the times of the Roman Empire.

ORBIS: The Stanford Geospatial Network Model of the Roman World reconstructs the time cost and financial expense associated with a wide range of different types of travel in antiquity. The model is based on a simplified version of the giant network of cities, roads, rivers and sea lanes that framed movement across the Roman Empire. It broadly reflects conditions around 200 CE but also covers a few sites and roads created in late antiquity.

ORBIS: The Stanford Geospatial Network Model of the Roman World

By simulating movement along the principal routes of the Roman road network, the main navigable rivers, and hundreds of sea routes in the Mediterranean, Black Sea and coastal Atlantic, this interactive model reconstructs the duration and financial cost of travel in antiquity.

TAICHI: open-source computer graphics library

I have been following the proceedings of the ACM SIGGRAPH conferences for more than 20 years now, and with recent years' developments in computational capacity it seems that many more algorithms and ideas make it into an application near you.

Take this one contribution by Yuanming Hu for example – the Taichi open source computer graphics library:

Taichi is an open-source computer graphics library that aims to provide easy-to-use infrastructures for computer graphics R&D. It’s written in C++14 and wrapped friendly with Python.

Yuanming Hu has been working on the development of Taichi since his third undergrad year (2016), mainly in his spare time. He would like to thank Prof. Toshiya Hachisuka and Prof. Seiichi Koshizuka for making possible his internship at UTokyo, where the initial parts of Taichi were developed.

Taichi: About

Also interesting is the trivia about the name.

augmented (reality) book cover

If you ever need to quickly explain augmented reality to a person who does not know it yet, you might want to use this (and other) use cases as a visual explanation:

I achieved this by separating the artwork and text into many individual layers, that I placed in receding layers of 3D depth, in a 3D program on the computer. And made sure everything outside the borders of the book is excluded, to give it the ‘portal’ effect.

Augmented Reality Book Cover by Alexander Wand

CCCamp 2019 – 21. – 25. August 2019

The Chaos Communication Camp is an international, five-day open-air event for hackers and associated life-forms. It provides a relaxed atmosphere for free exchange of technical, social, and political ideas. The Camp has everything you need: power, internet, food and fun. Bring your tent and participate!

CCCamp 2019 Wiki

It was back in 2005 that I last had the time and chance to attend an international open-air meeting of normal people. Of course I am talking about the 2005 What-the-hack, which I wrote about back then.

This year it's time again for the Chaos Communication Camp in Germany. Sadly I still won't be attending. Clearly that needs to change with one of the next iterations. With the CCC events becoming highly valuable for families as well, maybe there's a chance in the future to meet up with old and valued friends (wink-wink, Andreas Heil).

The Chaos Communication Camp (also known as CCCamp) is an international meeting of hackers that takes place every four years, organized by the Chaos Computer Club (CCC). So far all CCCamps have been held near Berlin, Germany.

The camp is an event for providing information about technical and societal issues, such as privacy, freedom of information and data security. Hosted speeches are held in big tents and conducted in English as well as German. Each participant may pitch a tent and connect to a fast internet connection and power.

CCCamp in Wikipedia

Enjoy the intro-movie that has just been made available to us, alongside the whole design material:

generate web-ready graphs and data visualizations without coding

Plot.ly is both a well-known software library for impressive visualizations and a company providing software and know-how around visualizations.

Since the libraries require a fair amount of know-how and programming to be useful to everyone, there is now seemingly a multitude of tools that wrap the powers of the library and provide a great web-browser experience for creating impressive visualizations:

The access path to this quite powerful tool is somehow not very easy to find. So I am linking to it here for your and my later reference.

retrofuturism sketches

On Twitter and ArtStation I've come across Sheng Lam, an artist with a quite fascinating way of sketching. Take a look and be inspired:

Concept artist with AAA experience. Currently working on Star Citizen at Cloud Imperium Games. Mostly draws and likes to eat delicious food.

Sheng Lam on ArtStation

a red triangle on the window

When you walk around in Tokyo, you will find that many buildings have red-triangle markings on some of the windows / panels on the outside.

some of the windows have red triangles pointing down
do you see the triangles pointing down on the upper right wall?

I noticed them as well but I could not think of an explanation. Digging for information brought up this:

Panels to fire access openings shall be indicated with either a red or orange triangle of equal sides (minimum 150mm on each side), which can be upright or inverted, on the external side of the wall and with the wordings “Firefighting Access – Do Not Obstruct” of at least 25mm height on the internal side.

Singapore Firefighting Guide 2018

The red triangles on the buildings/hotel windows in Japan are the rescue paths to be used in case of fire. All fire fighters know the meaning of this red triangle on the windows. Red in color makes it prominent, to be located easily by the fire fighters in case of a fire incident. During a fire incident, windows are generally broken to allow for smoke and other gases to come out of the building.

Triangles in Japan

making ICs at home

Try to wrap your head around this: there are people out there who take the term "Maker" to new levels. People like Sam Zeloof. He went out, created his very own integrated circuit designs and then built them. Like the actual silicon, the die, the bonded chip, the IC. The real thing.

Be inspired:

I am very excited to announce the details of my first integrated circuit and share the journey that this project has taken me on over the past year. I hope that my success will inspire others and help start a revolution in home chip fabrication. When I set out on this project I had no idea of what I had gotten myself into, but in the end I learned more than I ever thought I would about physics, chemistry, optics, electronics, and so many other fields. Furthermore, my efforts have only been matched with the most positive feedback and support from the world; I owe a sincere thanks to everyone who has helped me, given me advice, and inspired me on this project. Especially my amazing parents, who not only support and encourage me in any way they can but also give me a space to work in and put up with the electricity costs… Thank you!

Sam Zeloof

Bitmap & tilemap generation with the help of ideas from quantum mechanics

You can get a grasp of the beautiful side of science through visualizations and algorithms that produce visual results.

This is an example of producing lots and lots of complex data (houses!) from a small set of input data. The technique is widely used in game development, but it can also be helpful for generating parameterized test and simulation environments for machine learning.

So before sending you over to the more detailed explanation, the visual example:

These are a lot of different house images, generated using a program called WaveFunctionCollapse:

WFC initializes output bitmap in a completely unobserved state, where each pixel value is in superposition of colors of the input bitmap (so if the input was black & white then the unobserved states are shown in different shades of grey). The coefficients in these superpositions are real numbers, not complex numbers, so it doesn’t do the actual quantum mechanics, but it was inspired by QM. Then the program goes into the observation-propagation cycle:

On each observation step an NxN region is chosen among the unobserved which has the lowest Shannon entropy. This region’s state then collapses into a definite state according to its coefficients and the distribution of NxN patterns in the input.

On each propagation step new information gained from the collapse on the previous step propagates through the output.

On each step the overall entropy decreases and in the end we have a completely observed state, the wave function has collapsed.

It may happen that during propagation all the coefficients for a certain pixel become zero. That means that the algorithm has run into a contradiction and can not continue. The problem of determining whether a certain bitmap allows other nontrivial bitmaps satisfying condition (C1) is NP-hard, so it’s impossible to create a fast solution that always finishes. In practice, however, the algorithm runs into contradictions surprisingly rarely.
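To make the observation step concrete, here is a minimal Python sketch of just that step. It is only an illustration under my own assumptions (patterns and their input frequencies already extracted, propagation left out), not the actual WFC implementation:

import math
import random

def entropy(weights):
    # Shannon entropy of a cell whose candidate patterns have these weights
    total = sum(weights)
    return -sum(w / total * math.log(w / total) for w in weights)

def observe(grid, freq):
    # grid maps (x, y) -> set of still-possible pattern ids,
    # freq maps pattern id -> how often it occurred in the input
    open_cells = {c: p for c, p in grid.items() if len(p) > 1}
    if not open_cells:
        return None  # wave function fully collapsed
    # pick the unobserved cell with the lowest entropy (tiny jitter breaks ties)
    cell = min(open_cells,
               key=lambda c: entropy([freq[p] for p in open_cells[c]])
               + 1e-6 * random.random())
    patterns = list(grid[cell])
    weights = [freq[p] for p in patterns]
    # collapse into a definite state according to the input distribution
    grid[cell] = {random.choices(patterns, weights=weights)[0]}
    return cell

After each such collapse, the propagation step would then remove patterns that are no longer compatible from the neighbouring cells.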

The Wave Function Collapse algorithm has been implemented in C++, Python, Kotlin, Rust, Julia, Go, Haxe, JavaScript and adapted to Unity. You can download official executables from itch.io or run it in the browser. WFC generates levels in Bad North, Caves of Qud, several smaller games and many prototypes. It led to new research. For more related work, explanations, interactive demos, guides, tutorials and examples see the ports, forks and spinoffs section.

online celebrities: Elon Musk

Seemingly, short-message services are becoming the standard mode of communication for the powerful and rich. It seems that Twitter in particular is capable of bringing out the worst in those among us.

Of course the most controversial statements are being washed away by the sheer throughput. The next one always comes up quicker than you expect.

Helping the masses keep track is a main task of journalism. That being said, traditional journalism (as in newspapers, television) has great difficulty keeping track as well. Too much, too quick.

So new forms of journalism develop. Often more tendentious than helpful to the cause, they require the cautious mind of the reader to add some more perspective.

This is an example of such a newly developing "tracking journalism" site, built around the dazzling public character that is Elon Musk. It is called "elonmusk.today".

Editor’s note: others have done great work exposing Musk’s shameless charlatan carnival barking. If you enjoy this sort of thing, I highly recommend Niya White’s excellent article Musk Misses: The Stories You Don’t Hear About Tesla Anymore …

Tesla battery survey

If there is any discussion or argument about electric mobility these days, the topics of range and battery aging come up rather quickly.

Every once in a while you also hear these awesome stories about electric cars achieving total driven distances outrageously high compared to combustion-engine cars…

But what is it then, how does a battery in an electric car age over time and mileage? Car manufacturers seem to settle on a ca. 150,000 km total-driven-distance baseline for giving a battery-capacity percentage guarantee. Something like…

The future owners of ID. models won’t need to worry about the durability of their batteries either, as Volkswagen will guarantee that the batteries will retain at least 70 per cent of their usable capacity even after eight years or 160,000 kilometres.

Volkswagen Newsroom

or

Model S and Model X – 8 years (with the exception of the original 60 kWh battery manufactured before 2015, which is covered for a period of 8 years or 125,000 miles, whichever comes first).

Model 3 – 8 years or 100,000 miles, whichever comes first, with minimum 70% retention of Battery capacity over the warranty period.

Model 3 with Long-Range Battery – 8 years or 120,000 miles, whichever comes first, with minimum 70% retention of Battery capacity over the warranty period.

Tesla

So. Guarantees are one thing, reality another. There's an interesting user-driven survey set up where Tesla owners can hand in their car's data and thus participate in the survey.

And it yields results (getting updated as you read…):

In a nutshell: it seems there is a good chance that your Tesla car might retain above 90% of its originally specified battery capacity after the guaranteed 100,000 miles and even after 150,000 miles (241,000 km)…

Good news, that is! Given that the average household drives about 20,000 km/year or less, this would mean over 12 years of use with the car still holding 90% of its battery capacity. The battery being the most expensive single component of an electric car, this is extremely good news, as it is unlikely that the battery will be the reason for the car to be scrapped after this mileage.
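A quick back-of-the-envelope check of that claim (the 20,000 km/year household mileage is my assumption):

# how many years of average driving fit into the surveyed battery mileage?
km_surveyed = 241_000          # ≈ 150,000 miles, see survey above
km_per_year = 20_000           # assumed average household mileage
print(km_surveyed / km_per_year)   # ≈ 12 years of use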

I/O Is Faster Than the CPU

I/O is getting faster in servers that have fast programmable NICs and non-volatile main memory operating close to the speed of DRAM, but single-threaded CPU speeds have stagnated. Applications cannot take advantage of modern hardware capabilities when using interfaces built around abstractions that assume I/O to be slow. We therefore propose a structure for an OS called parakernel, which eliminates most OS abstractions and provides interfaces for applications to leverage the full potential of the underlying hardware. The parakernel facilitates application-level parallelism by securely partitioning the resources and multiplexing only those resources that are not partitioned.

https://dl.acm.org/citation.cfm?doid=3317550.3321426

Tilt your head, be more aggressive

“We show that tilting one’s head downward systematically changes the way the face is perceived, such that a neutral face—a face with no muscle movement or facial expression—appears to be more dominant when the head is tilted down,” explain researchers Zachary Witkower and Jessica Tracy of the University of British Columbia. 

A Facial-Action Imposter

Faux Joe Rogan

Our deep learning engineers at Dessa built a model to replicate Joe Rogan’s voice to showcase current AI techniques. To understand how we developed the technology and to discuss the ethical implications of this work, read our announcement article.

fakejoerogan.com

They've got 8 example sound snippets, and it's up to you to identify which file is fake and which is real:

look at the earth

Watch sunlight and weather patterns move across Earth throughout the day, and bask in the glory of our blue marble in real time.

Every 20 minutes (or every hour, you pick), Downlink updates your desktop background with the newest images of Earth.

Choose from 8 different views of Earth, including stunning full disk images from 3 different geostationary satellites.

Downlink is a macOS application which downloads the most current image taken by Earth-orbiting satellites. So what you see depicts reality as accurately as is technically possible right now.

Interestingly all this data comes from public domain sources. And the makers of the app have documented their sources:

Thanks to NASA, NOAA, JMA, Lockheed Martin, Harris Corporation, ULA, MHI, and everyone else who designed, built, launched, and operate GOES-16, GOES-17, and Himawari-8.
If you’d like to use the sources Downlink is built on top of, here is the file pointing to those resources. Use it, and tell me what you’ve built!

I am using macOS as well as Linux as my main desktop operating systems. My Macs are set up with Downlink. But what about my Linux machines?

Easy! The file referring to the sources is a simple JSON document.

With some simple steps on Linux we can grab the URL we want, download the image and update the desktop background.

Step 1: Getting the URL from this JSON.

Install jq, curl and xargs, then run this in your shell:

curl -s http://downlinkapp.com/sources.json | jq -r '.sources[2].url.full' 

This will give you the "full" URL of the source at index 2 in the list, which is Himawari-8.

Step 2: Download the image.

curl -s http://downlinkapp.com/sources.json | jq -r '.sources[2].url.full' | xargs -I{} curl -s -o background.jpg {}

By simply adding the xargs and curl calls, the URL output of the first command is taken as input to the second curl call, which downloads it and stores the file as "background.jpg".

Step 3: Setting as background image.

How to set the desktop background image depends on which desktop environment you are using on Linux. You will need to look up in its manual how to set the background image from the command line.
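For a GNOME-based desktop, for example, all three steps fit into a few lines of Python. This is a hedged sketch assuming the sources.json layout shown above and a desktop that honours the org.gnome.desktop.background gsettings key:

import json
import subprocess
import urllib.request

SOURCES = "http://downlinkapp.com/sources.json"

# step 1: grab the "full" URL of the source at index 2 (Himawari-8)
with urllib.request.urlopen(SOURCES) as resp:
    url = json.load(resp)["sources"][2]["url"]["full"]

# step 2: download the image
path = "/tmp/background.jpg"
urllib.request.urlretrieve(url, path)

# step 3: set it as the GNOME wallpaper
subprocess.run(["gsettings", "set", "org.gnome.desktop.background",
                "picture-uri", "file://" + path], check=True)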

But for most use cases there’s a tool that helps. It’s called Variety.

Variety is an open-source wallpaper changer for Linux
Variety is packed with great features, yet slim and easy to use. It can use local images or automatically download wallpapers from Unsplash and other online sources, allows you to rotate them on a regular interval, and provides easy ways to separate the great images from the junk. Variety can also display wise and funny quotations or a nice digital clock on the desktop.

Earth, Moon, Spaceship.

As you might know, humans are going to go "back to the moon". At least if you ask a certain eCommerce mogul.

We have been there already. 40+ years ago. And since then we have unlearnt how to do it.

Just to give an impression of what we as humans were capable of achieving 40 years ago:

Moon (below), Apollo 16 Command Module (left), Earth (right)

There’s a whole collection of these photos freely available for further digging. Go there and spend a day!

Imagine: humans flew there and back. With 60s tech. What could we do now if we really wanted to?

Of course, the important bit is not really flying there and back. The most important thing, for me, is perspective.

Look at that small blue-white marble. We should reset our perspectives regularly.

let AI convert videos to comic strips for you

Artificial intelligence is used more and more to achieve tasks only humans could do before. Especially in areas that require mastering a certain technique, AI goes above and beyond what humans are able to do.

In this case a team has implemented something that takes a video as input and generates a comic strip from it. Imagine it looking like this:

Input
Output

In this paper, we propose a solution to transform a video into a comics. We approach this task using a neural style algorithm based on Generative Adversarial Networks (GANs).

Paper

They even made a nice website where you can try it yourself with any YouTube video you want:

Video Game History

Have you ever asked yourself what the generations coming after us will know about the culture we grew up with? As much as computers are a part of my story, a bit of gaming is as well.

From games on tape to games on floppy disk to CDs to media-less game streaming, it has been quite a couple of decades. And with the demise of physical media, access to the actual games will become harder, especially for those games never delivered outside of online platforms. Those platforms will die. None of them will remain forever.

Hardware platforms follow the same logic: Today it’s the new hype. Tomorrow the software from yesterday won’t be supported by hardware and/or operating systems. Everything is in constant flux.

Emulation is a great tool for many use-cases. But it probably won’t solve all challenges. Preserving access to software and the knowledge around the required dependencies is the mission of the Video Game History Foundation.

Video game preservation matters because video games matter. Games are deeply ingrained in our culture, and they’re here to stay. They generated an unprecedented $91 billion dollars in revenue in 2016. They’re being collected by the Smithsonian, the Museum of Modern Art, and the Library of Congress. They’ve inspired dozens of feature films and even more books. They’re used as a medium of personal expression, as the means for raising money for charity, as educational tools, and in therapy.
And yet, despite all this, video game history is disappearing. The majority of games that have been created throughout history are no longer easily accessible to study and play. And even when we can play games, that playable code is only a part of the story.

Video Game History Foundation

Digital Daily Routine as an Experiment – “Digitaler Alltag als Experiment”

Last week we were approached by Prof. Dr. Nicole Zillien from Justus-Liebig-University in Gießen, Germany. She explained to us that she is currently working on a book.

In this book an empirical analysis is carried out on "quantified-self" approaches to real-life problems.

With all the information and data we had posted on our personal websites, like this blog and the "loosing weight" webpage, we apparently qualified for a mention. We were asked whether it would be okay to be named in the book or whether we wanted to be pseudonymized.

Since everything we have posted online and which is publicly accessible right now can and should be quoted, we were happy to give the go-ahead. We publish things because we want them to spur further thoughts.

The book will be out at the end of 2019 / beginning of 2020. As soon as it is, we hope to get a review copy so we can talk about it on this blog once again.

We do not know what exactly is being written and linked to us – we might well end up as the worst example of all time. But then there would be something to learn from that as well.

Digital assistant language teacher

For a couple of months now we have been trying harder to learn a foreign language.

And as we expected, it is very hard to get a proper grasp of speaking the language. Especially since it is a very different language from our mother tongue.

And while comfortably interacting with digital assistants around the house every day in English and German, the thought came up: why don't these digital assistants help with foreign-language listening and speaking training?

I mean, Google Assistant answers questions in the language you asked them in. Siri and Alexa need to know upfront in which language you are going to ask questions. But at least Alexa can translate between languages…

But in all seriousness: why do we not already have this obvious killer feature delivered? Everyone could already have a personal language-training partner…

a virtual network inside your machine

Did you ever start a horde of virtual machines and a complicated VM-only network set-up just to simulate a moderately complex network and the interaction of nodes in that network? Well, that's a tiresome, error-prone and labour-intensive process. Fear no more, there's a tool to the rescue.

“Mininet creates a realistic virtual network, running real kernel, switch and application code, on a single machine (VM, cloud or native), in seconds, with a single command:”


“Because you can easily interact with your network using the Mininet CLI (and API), customize it, share it with others, or deploy it on real hardware, Mininet is useful for development, teaching, and research. Mininet is also a great way to develop, share, and experiment with OpenFlow and Software-Defined Networking systems.

Mininet is actively developed and supported, and is released under a permissive BSD Open Source license. We encourage contribution of code, bug reports/fixes, documentation, and anything else that can improve the system!”

Source: http://mininet.github.com/
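Beyond the CLI, Mininet also ships a Python API. A minimal sketch (assuming Mininet is installed; it needs root privileges) that builds a single-switch network with three hosts and ping-tests it:

from mininet.net import Mininet
from mininet.topo import SingleSwitchTopo

# one switch, three hosts - built, tested and torn down in seconds
net = Mininet(topo=SingleSwitchTopo(k=3))
net.start()
net.pingAll()   # every host pings every other host
net.stop()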

Security Engineering — The Book

The second edition of the book “Security Engineering” by Ross Anderson is available as a full download. It’s quite a reference and a must-read for anybody with an interest in security (which for example all developers should have).

“When I wrote the first edition, we put the chapters online free after four years and found that this boosted sales of the paper edition. People would find a useful chapter online and then buy the book to have it as a reference. Wiley and I agreed to do the same with the second edition, and now, four years after publication, I am putting all the chapters online for free. Enjoy them – and I hope you’ll buy the paper version to have as a convenient shelf reference.”

Source: http://www.cl.cam.ac.uk/~rja14/book.html

know your numbers!

Wikipedia describes latency this way:

“Latency is a measure of time delay experienced in a system, the precise definition of which depends on the system and the time being measured. In communications, the lower limit of latency is determined by the medium being used for communications. In reliable two-way communication systems, latency limits the maximum rate that information can be transmitted, as there is often a limit on the amount of information that is “in-flight” at any one moment. In the field of human-machine interaction, perceptible latency has a strong effect on user satisfaction and usability.” (Wikipedia)

Given that, it's quite important for any developer to know their numbers. Since latency has a huge impact on how software should be architected, it's important to keep these numbers in mind:

 

[screenshot: interactive latency numbers chart, 2012-12-25]

 

Source: http://www.eecs.berkeley.edu/~rcs/research/interactive_latency.html
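For reference, the gist of that page: the classic "latency numbers every programmer should know" (after Jeff Dean and Peter Norvig), here as rough, ca. 2012 orders of magnitude in a small Python snippet:

# all values in nanoseconds, rough orders of magnitude (ca. 2012)
LATENCY_NS = {
    "L1 cache reference": 0.5,
    "branch mispredict": 5,
    "L2 cache reference": 7,
    "mutex lock/unlock": 25,
    "main memory reference": 100,
    "send 1 KB over 1 Gbps network": 10_000,
    "read 4 KB randomly from SSD": 150_000,
    "read 1 MB sequentially from memory": 250_000,
    "round trip within same datacenter": 500_000,
    "disk seek": 10_000_000,
    "read 1 MB sequentially from disk": 20_000_000,
    "packet round trip CA to Netherlands and back": 150_000_000,
}

for what, ns in LATENCY_NS.items():
    print(f"{what:45s} {ns:>13,.1f} ns")

The exact figures shift with every hardware generation (which is precisely what the interactive page visualizes), but the relative orders of magnitude are what matter for architecture decisions.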