This application generates a random medieval city layout of a requested size. The generation method is rather arbitrary; the goal is to produce a nice-looking map, not an accurate model of a city. Maybe in the future I’ll use its code as a basis for some game, or maybe not.
Medieval Fantasy City Generator
There’s a built-in design-mode in most modern browsers. Just switch on the developer tools / console and enable it:
document.designMode = 'on';
If you do not know Japanese curry yet, you are missing out big time.
Unfortunately, due to Typhoon 19, the Musashikosugi Curry Expo was cancelled along with the overall Kosugi Festival 2019.
But the curry stamp rally started before the typhoon hit the city and still carries on until the end of October.
It works like this:
You go to each restaurant. You eat a meal. You get a stamp.
The more stamps you collect, the higher-valued the prizes: more meal coupons, even electronics!
But anyway. It’s all about Japanese and Indian curry.
As you too might want to tick all 28 restaurants off your bucket list, here is a map I made of all of them:
As you can see – in the Google Maps app on the iPhone you even get a nice progress indicator for the restaurants you have already visited. I’ve been to the Parco curry already, so that one counted.
A lot is going on in browsers these days. They are becoming increasingly powerful and resource-demanding.
So it just feels natural to combine resource-hungry infrastructure with low-resource graphics to get the worst of both worlds.
Not quite, but you get the idea.
There’s a guy on the internet (haha) who dedicates his time to writing ASCII / character-based graphics engines and games with them.
Of course you want to see what those games and graphics look like. Exhibit #1:
And the more advanced Exhibit #2:
There are the DJI drones that seemingly own the market at this point, mostly used to take aerial images and videos. Your average YouTuber will probably have two or more of them.
It turns out that if you add modern camera technology and a lot of processing power to these small flying objects, you can do crazy things like indoor real-time 3D mapping…
Skydio is a vendor to look at when it comes to such interesting mapping applications.
Back in March 2019 we’d already seen artificial people. Yawn.
Back then, a generative adversarial network (GAN) was used to produce random human faces from scratch – it synthesized human faces out of pure randomness.
Now take it a step further and input actual small images, like thumbnails or emojis.
And what do you get?
Quite impressive, eh? There’s more after the jump.
Oh and they wrote a paper about it: Progressive Face Super-Resolution via Attention to Facial Landmark
A month ago I wrote about a very black paint. This month brings me a paper about an even blacker substance.
The synergistically incorporated CNT–metal hierarchical architectures offer record-high broadband optical absorption with excellent electrical and structural properties as well as industrial-scale producibility.
Paper: Breakdown of Native Oxide Enables Multifunctional, Free-Form Carbon Nanotube–Metal Hierarchical Architectures
I’ve written on this topic before here. And as developers venture more into these generative algorithms, it’s all the more fun to see even the intermediate results.
Oskar Stålberg writes about his little experiments and bigger libraries on Twitter. The above short demonstration was created by him.
Especially worth a look is the library he made available on GitHub: mxgmn/WaveFunctionCollapse.
Some more context, of questionable helpfulness:
In quantum mechanics, wave function collapse occurs when a wave function—initially in a superposition of several eigenstates—reduces to a single eigenstate due to interaction with the external world. This interaction is called an “observation”. It is the essence of a measurement in quantum mechanics which connects the wave function with classical observables like position and momentum. Collapse is one of two processes by which quantum systems evolve in time; the other is the continuous evolution via the Schrödinger equation. Collapse is a black box for a thermodynamically irreversible interaction with a classical environment. Calculations of quantum decoherence predict apparent wave function collapse when a superposition forms between the quantum system’s states and the environment’s states. Significantly, the combined wave function of the system and environment continues to obey the Schrödinger equation.
Wikipedia: WFC
Right. Well. Told you. Here are some nice graphics of this being applied, to calm you:
If you own a modern phone, it is very likely that it stores the photos you take in a wonderful format called HEIC – the “High Efficiency Image File Format” (HEIF).
So HEIC does not quite fit into most workflows yet. But you can make it fit on Linux.
ImageMagick and current GIMP installations apparently still do not come with HEIF support compiled in. But you can install a tool to easily convert an HEIC image into a JPG file on the command line:
apt install libheif-examples
and then the tool heif-convert is your friend.
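For a whole folder of photos, a small loop does the trick. A sketch, assuming heif-convert from the libheif-examples package is on your PATH; the file names are made up:

```shell
# convert every .heic file in the current directory to a .jpg next to it
for f in *.heic; do
  [ -e "$f" ] || continue              # no .heic files present? do nothing
  heif-convert "$f" "${f%.heic}.jpg"   # strip .heic, append .jpg
done
```

The `${f%.heic}` parameter expansion removes the suffix, so `IMG_0001.heic` becomes `IMG_0001.jpg`.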
Augmented Reality needs proper 3D geometry and the ability to sense the environment to interact with it. At some point I would expect tools to show up that allow us to do some of this ourselves.
Seems like we’re one step closer. Ubiquity6 is reaching out to give early access to interested users:
We’re giving early access to our 3D mapping tools for creators and artists! If you’re interested in trying it out sign up for early access here: https://ubiquity6.typeform.com/to/bmpbkB
Ubiquity6 on Twitter
Of course. I applied. And I’ve just started testing.
AI and deep learning are not always necessary or helpful. In this case, impressive results have been achieved without the use of any of the hyped technologies.
You give the algorithm two inputs: a video that you want to stylize and a reference picture that resembles the style you want to achieve.
We introduce a new example-based approach to video stylization, with a focus on preserving the visual quality of the style, user controllability and applicability to arbitrary video. Our method gets as input one or more keyframes that the artist chooses to stylize with standard painting tools. It then automatically propagates the stylization to the rest of the sequence. To facilitate this while preserving visual quality, we developed a new type of guidance for state-of-art patch-based synthesis, that can be applied to any type of video content and does not require any additional information besides the video itself and a user-specified mask of the region to be stylized. We further show a temporal blending approach for interpolating style between keyframes that preserves texture coherence, contrast and high frequency details. We evaluate our method on various scenes from real production setting and provide a thorough comparison with prior art.
Paper: Stylizing Video by Example
Apparently there is also a Windows demo available in which you are supposedly able to create your own stylized short clips. But when I wanted to try it out, it threw a lot of funky messages about the application being untrustworthy / possibly malicious. So be aware and cautious.
Does not happen often. And did not happen for long. Actually I like those “service down” messages from several websites. Does anyone remember the Fail Whale?
The image [..] is a visual/artistic experiment playing with simultaneous contrast resulting from other experiments these days. An over-saturated colored grid overlayed on a grayscale image causes the grayscale cells to be perceived as having color.
original article
The processing needed to create the above image happened alongside unrelated but significant code improvements in the last couple of weeks. I have been visiting mitch – a prolific GIMP contributor – for collaboration, and lots of progress has been – and still is – being made on babl, GEGL and GIMP.
The firecracker exploded. Apparently, after two weeks of using the Chuwi Hi10 Air, its eMMC flash is malfunctioning.
And in a totally strange way: seemingly, every byte on the eMMC can be read. Even Windows 10 boots. But after a while it hangs and blue-screens – apparently because it tries to write to the eMMC, and when those failed writes pile up in the caches, at some point the system calls it quits.
Anyhow: it means that no byte currently on this eMMC can be deleted or overwritten – it can only be read.
The great Chinese support was really helpful and offered to replace the device free of charge right away. That’s very nice! But I came to the conclusion that I cannot send the device in, because:
It contains a full set of synced private data that I cannot remove by any means, because the freaking soldered-on eMMC flash is broken.
The recipient of this broken tablet in China would be able to read all my data, and I could not do anything about it.
Only an extremely small fraction of the data on there is unencrypted – just because I had not yet switched on encryption during the initial set-up I was still doing on the device. And that little piece of data is already enough to keep me from sending the device out.
Now, what can we learn from this? We can learn: never, ever work with anything – even during set-up – without full encryption.
The nicest display I own :-)
A visualization of my son’s sleep pattern from birth to his first birthday. Crochet border surrounding a double knit body. Each row represents a single day. Each stitch represents 6 minutes of time spent awake or asleep
Seung Lee on Twitter
No babies here. But I want such a blanket now.
If you ever want to quickly explain what augmented reality could be to a person who does not know it yet, you might want to use this (and other) use cases as a visual explanation:
I achieved this by separating the artwork and text into many individual layers, that I placed in receding layers of 3D depth, in a 3D program on the computer. And made sure everything outside the borders of the book is excluded, to give it the ‘portal’ effect.
Augmented Reality Book Cover by Alexander Wand
Think of this: you want to capture a whole, multi-scroll-page web page in one image.
This can be difficult without the right tools. It surely would be a lot of work to stitch a screenshot tens of thousands of pixels high together from multiple single screenshots…
CutyCapt is there to help! It’s a command-line tool encapsulating the very powerful WebKit browser engine to render a full page and then create a single-file screenshot of the whole page for you.
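The invocation is essentially a one-liner. A sketch with made-up example values for the URL and output file; the guard just skips the call on machines where CutyCapt is not installed:

```shell
URL="https://www.example.org"   # page to capture (example value)
OUT="fullpage.png"              # single-file screenshot to write
# render the page in WebKit and save the full height as one image
if command -v CutyCapt >/dev/null; then
  CutyCapt --url="$URL" --out="$OUT"
fi
```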
For example, this is what it did when told to screenshot this website:
Addition: of course, the app I am using is called Runkeeper, yet I am mostly doing cycling. On average I am actually doing about 30 km every day right before work starts. Here’s an additional statistic just for cycling:
In search of alternatives to the traditional centralized hosted social networks a lot of smart people have started to put time and thought into what is called “the-federation”.
The Federation refers to a global social network composed of nodes that talk to each other. Each of them is an installation of software which supports one of the federated social web protocols.
What is The Federation?
PixelFed is a federated social image sharing platform, similar to instagram. Federation is done using the ActivityPub protocol, which is used by Mastodon, PeerTube, Pleroma, and more. Through ActivityPub PixelFed can share and interact with these platforms, as well as other instances of PixelFed.
the-federation
I am posting this here as a note in my personal logbook.
Given that there’s already a Dockerfile I will give it a try as soon as possible.
The Chaos Communication Camp is an international, five-day open-air event for hackers and associated life-forms. It provides a relaxed atmosphere for free exchange of technical, social, and political ideas. The Camp has everything you need: power, internet, food and fun. Bring your tent and participate!
CCCamp 2019 Wiki
It was back in 2005 that I last had the time and chance to attend an international open-air meeting of normal people. Of course I am talking about the 2005 What-the-Hack, which I wrote about back then.
This year it’s time again for the Chaos Communication Camp in Germany. Sadly, I still won’t be attending. Clearly that needs to change at one of the next iterations. With the CCC events becoming highly valuable for families as well, maybe there’s a chance to meet up with old and valued friends in the future (wink-wink, Andreas Heil).
The Chaos Communication Camp (also known as CCCamp) is an international meeting of hackers that takes place every four years, organized by the Chaos Computer Club (CCC). So far all CCCamps have been held near Berlin, Germany.
The camp is an event for providing information about technical and societal issues, such as privacy, freedom of information and data security. Hosted speeches are held in big tents and conducted in English as well as German. Each participant may pitch a tent and connect to a fast internet connection and power.
Enjoy the intro-movie that has just been made available to us, alongside the whole design material:
When you walk around in Tokyo you will find that many buildings have red-triangle markings on some of the windows / panels on the outside.
I noticed them as well but I could not think of an explanation. Digging for information brought up this:
Panels to fire access openings shall be indicated with either a red or orange triangle of equal sides (minimum 150mm on each side), which can be upright or inverted, on the external side of the wall and with the wordings “Firefighting Access – Do Not Obstruct” of at least 25mm height on the internal side.
Singapore Firefighting Guide 2018
The red triangles on the buildings/hotel windows in Japan are the rescue paths to be used in case of fire. All fire fighters know the meaning of this red triangle on the windows. Red in color makes it prominent, to be located easily by the fire fighters in case of a fire incident. During a fire incident, windows are generally broken to allow for smoke and other gases to come out of the building.
Triangles in Japan
You can get a grasp of the beautiful side of science with visualizations and algorithms that produce visual results.
This is an example of producing lots and lots of complex data (houses!) from a small set of input data. The technique is widely used in game development but can also be helpful for generating parameterized test and simulation environments for machine learning.
So before sending you over to the more detailed explanation, here is the visual example:
This is a lot of different house images, all generated using a program called WaveFunctionCollapse:
WFC initializes output bitmap in a completely unobserved state, where each pixel value is in superposition of colors of the input bitmap (so if the input was black & white then the unobserved states are shown in different shades of grey). The coefficients in these superpositions are real numbers, not complex numbers, so it doesn’t do the actual quantum mechanics, but it was inspired by QM. Then the program goes into the observation-propagation cycle:
On each observation step an NxN region is chosen among the unobserved which has the lowest Shannon entropy. This region’s state then collapses into a definite state according to its coefficients and the distribution of NxN patterns in the input.
On each propagation step new information gained from the collapse on the previous step propagates through the output.
On each step the overall entropy decreases and in the end we have a completely observed state, the wave function has collapsed.
It may happen that during propagation all the coefficients for a certain pixel become zero. That means that the algorithm has run into a contradiction and can not continue. The problem of determining whether a certain bitmap allows other nontrivial bitmaps satisfying condition (C1) is NP-hard, so it’s impossible to create a fast solution that always finishes. In practice, however, the algorithm runs into contradictions surprisingly rarely.
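The observe/propagate cycle described above can be sketched in a few lines. This is only a toy illustration, not Gumin’s actual implementation: it works on a 1-D row of cells instead of NxN bitmap patterns, uses a made-up tile set and adjacency table, and stands in “fewest remaining options” for the weighted Shannon entropy heuristic:

```python
import random

# made-up tiles and adjacency rules, purely for illustration
TILES = {"sea", "coast", "land"}
ALLOWED = {                     # which tiles may sit next to each other
    "sea": {"sea", "coast"},
    "coast": {"sea", "coast", "land"},
    "land": {"coast", "land"},
}

def collapse(width, seed=0):
    rng = random.Random(seed)
    # every cell starts in full superposition of all tiles
    wave = [set(TILES) for _ in range(width)]
    while any(len(cell) > 1 for cell in wave):
        # observation: pick an uncollapsed cell with the fewest options
        # (the stand-in for "lowest Shannon entropy") and fix it
        i = min((k for k, c in enumerate(wave) if len(c) > 1),
                key=lambda k: len(wave[k]))
        wave[i] = {rng.choice(sorted(wave[i]))}
        # propagation: drop options with no compatible neighbour option,
        # repeating until nothing changes any more
        changed = True
        while changed:
            changed = False
            for j in range(width - 1):
                for a, b in ((j, j + 1), (j + 1, j)):
                    ok = {t for t in wave[a]
                          if any(t in ALLOWED[n] for n in wave[b])}
                    if not ok:  # a cell lost all options: contradiction
                        raise RuntimeError("contradiction, restart needed")
                    if ok != wave[a]:
                        wave[a] = ok
                        changed = True
    return [next(iter(cell)) for cell in wave]
```

Calling `collapse(12)` yields a fully observed row in which every pair of neighbours satisfies the adjacency table; the `RuntimeError` branch corresponds to the contradiction case described above.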