This application generates a random medieval city layout of a requested size. The generation method is rather arbitrary; the goal is to produce a nice-looking map, not an accurate model of a city. Maybe in the future I’ll use its code as a basis for some game, or maybe not.
Medieval Fantasy City Generator
There’s a built-in design-mode in most modern browsers. Just switch on the developer tools / console and enable it:
document.designMode = 'on';
If you do not know Japanese curry yet, you are missing out big time.
Unfortunately, due to Typhoon No. 19 the Musashikosugi Curry Expo was cancelled along with the overall Kosugi Festival 2019.
But the curry stamp rally started before the typhoon hit the city and still carries on until the end of October.
It works like this:
You go to each restaurant. You eat a meal. You get a stamp.
The more stamps you collect, the more valuable the prizes: more meal coupons, even electronics!
But anyway. It’s all about Japanese and Indian curry.
As you too might want to tick all 28 restaurants off your bucket list, take this map I made of them:
As you can see – in the iPhone Google Maps app you even get a nice progress bar for the ones you have already visited. I’ve been to Parco Curry already – so that one counted.
A lot is going on in browsers these days. They are becoming increasingly powerful and resource-demanding.
So it just feels natural to combine resource-hungry infrastructure with low-resource graphics to get the worst of both worlds.
Not quite, but you get the idea.
There’s a guy on the internet (haha) who dedicates his time to writing ASCII / character-based graphics engines and games with them.
Of course you want to see that in action. Exhibit #1:
And the more advanced Exhibit #2:
There are the DJI drones that seemingly own the market at this point, mostly used to take aerial images and videos. Your average YouTuber will probably have two or more of them.
Turns out that, if you add modern camera technology to these small flying objects and a lot of processing power you can do crazy things like indoor realtime 3D mapping…
Skydio is a vendor to look at when it comes to such interesting mapping applications.
Back in March 2019 we’d already seen artificial people. Yawn.
Back then a generative adversarial network (GAN) was used to synthesize random human faces from scratch, out of pure randomness.
Now take it a step further and input actual small images. Like thumbnails or emojis or else.
And what do you get?
Quite impressive, eh? There’s more after the jump.
Oh and they wrote a paper about it: Progressive Face Super-Resolution via Attention to Facial Landmark
A month ago I wrote about a very black paint. This month brings a paper about an even blacker substance.
The synergistically incorporated CNT–metal hierarchical architectures offer record-high broadband optical absorption with excellent electrical and structural properties as well as industrial-scale producibility.
Paper: Breakdown of Native Oxide Enables Multifunctional, Free-Form Carbon Nanotube–Metal Hierarchical Architectures
I’ve written on this topic before here. And as developers venture more into these generative algorithms, it’s all the more fun to see even the intermediate results.
Oskar Stålberg writes about his little experiments and bigger libraries on Twitter. The above short demonstration was created by him.
Especially worth a look is the library he made available on GitHub: mxgmn/WaveFunctionCollapse.
Some more context, of questionable helpfulness:
In quantum mechanics, wave function collapse occurs when a wave function—initially in a superposition of several eigenstates—reduces to a single eigenstate due to interaction with the external world. This interaction is called an “observation”. It is the essence of a measurement in quantum mechanics which connects the wave function with classical observables like position and momentum. Collapse is one of two processes by which quantum systems evolve in time; the other is the continuous evolution via the Schrödinger equation. Collapse is a black box for a thermodynamically irreversible interaction with a classical environment. Calculations of quantum decoherence predict apparent wave function collapse when a superposition forms between the quantum system’s states and the environment’s states. Significantly, the combined wave function of the system and environment continue to obey the Schrödinger equation.
Wikipedia: WFC
Right. Well. Told you. Here are some nice graphics of this applied to calm you:
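To make the name a little less mysterious: the procedural-generation variant of “wave function collapse” is far simpler than the physics. Here is a minimal sketch in Python, using a made-up three-tile rule set – not the actual model from mxgmn/WaveFunctionCollapse – just to show the observe-and-propagate loop:

```python
import random

# Hypothetical mini tile set: which tiles may sit next to which.
# With this particular rule set "coast" is always allowed, so a
# contradiction (empty cell) can never occur and no backtracking is
# needed; real implementations do need it.
RULES = {
    "sea":   {"sea", "coast"},
    "coast": {"sea", "coast", "land"},
    "land":  {"coast", "land"},
}
TILES = list(RULES)

def collapse(width, height, rng=random):
    # Every cell starts in "superposition": all tiles are still possible.
    grid = [[set(TILES) for _ in range(width)] for _ in range(height)]

    def neighbors(x, y):
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < width and 0 <= ny < height:
                yield nx, ny

    def propagate(x, y):
        # Remove neighbor options that no longer have a compatible tile.
        stack = [(x, y)]
        while stack:
            cx, cy = stack.pop()
            allowed = set().union(*(RULES[t] for t in grid[cy][cx]))
            for nx, ny in neighbors(cx, cy):
                reduced = grid[ny][nx] & allowed
                if reduced != grid[ny][nx]:
                    grid[ny][nx] = reduced
                    stack.append((nx, ny))

    while True:
        # Pick the undecided cell with the fewest remaining options
        # (the "lowest entropy" cell)...
        open_cells = [(x, y) for y in range(height) for x in range(width)
                      if len(grid[y][x]) > 1]
        if not open_cells:
            return [[cell.pop() for cell in row] for row in grid]
        x, y = min(open_cells, key=lambda p: len(grid[p[1]][p[0]]))
        # ...then "observe" it by collapsing to one tile and propagate.
        grid[y][x] = {rng.choice(sorted(grid[y][x]))}
        propagate(x, y)

for row in collapse(8, 4, random.Random(42)):
    print(" ".join(tile[0] for tile in row))  # s / c / l map
```

The quantum vocabulary maps almost one-to-one: the set of remaining tiles per cell is the superposition, picking a tile is the observation, and constraint propagation plays the role of decoherence spreading through the neighbors.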
If you own a modern phone it is very likely that it stores the photos you take in a wonderful format called HEIC – the “High Efficiency Image File Format (HEIF)”.
So HEIC does not quite fit into most workflows yet. But you can make it fit with the following on Linux.
ImageMagick and current GIMP installations apparently still don’t come pre-compiled with HEIF support. But you can install a tool to easily convert an HEIC image into a JPG file on the command line:
apt install libheif-examples
and then the tool heif-convert is your friend.
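For whole camera dumps, a tiny wrapper can call heif-convert once per file. A rough sketch, assuming heif-convert (from libheif-examples) is on the PATH – the jpg_target helper name is mine, not part of the tool:

```python
import subprocess
from pathlib import Path

def jpg_target(src: Path) -> Path:
    # IMG_1234.HEIC -> IMG_1234.jpg, next to the original
    return src.with_suffix(".jpg")

def convert_all(folder="."):
    # Calls the heif-convert CLI for every HEIC file in the folder.
    for src in sorted(Path(folder).glob("*.HEIC")):
        dst = jpg_target(src)
        if dst.exists():
            continue  # do not clobber earlier conversions
        subprocess.run(["heif-convert", str(src), str(dst)], check=True)

# convert_all()  # uncomment to run in the current directory
```

check=True makes the script stop on the first file heif-convert cannot handle instead of silently skipping it.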
Augmented Reality needs proper 3D geometry and the ability to sense the environment to interact with it. At some point I would expect tools to show up that allow us to do some of this ourselves.
Seems like we’re one step closer. Ubiquity6 is reaching out to give interested users early access:
We’re giving early access to our 3D mapping tools for creators and artists! If you’re interested in trying it out sign up for early access here: https://ubiquity6.typeform.com/to/bmpbkB
Ubiquity6 on Twitter
Of course. I applied. And I’ve just started testing.
AI and deep learning are not always necessary or helpful. In this case impressive results have been achieved without any of the hyped technologies.
In this case you give the algorithm two inputs: a video that you want to stylize and a picture that resembles the style you want to achieve.
We introduce a new example-based approach to video stylization, with a focus on preserving the visual quality of the style, user controllability and applicability to arbitrary video. Our method gets as input one or more keyframes that the artist chooses to stylize with standard painting tools. It then automatically propagates the stylization to the rest of the sequence. To facilitate this while preserving visual quality, we developed a new type of guidance for state-of-art patch-based synthesis, that can be applied to any type of video content and does not require any additional information besides the video itself and a user-specified mask of the region to be stylized. We further show a temporal blending approach for interpolating style between keyframes that preserves texture coherence, contrast and high frequency details. We evaluate our method on various scenes from real production setting and provide a thorough comparison with prior art.
Stylizing Video by Example Paper
Apparently there also is a Windows demo available in which you are supposedly able to create your own stylized short clips. But when I wanted to try it out, it threw a lot of funky messages flagging the application as untrustworthy / possibly malicious. So be aware and cautious.
Does not happen often. And did not last long. Actually I like those “service down” messages from several websites. Does anyone remember the Fail Whale?
The image [..] is a visual/artistic experiment playing with simultaneous contrast, resulting from other experiments these days. An over-saturated colored grid overlayed on a grayscale image causes the grayscale cells to be perceived as having color.
original article
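The described setup is easy to reproduce yourself: render a grayscale image, then draw saturated grid lines over it. A small sketch of my own (not the original article’s code) that writes a PPM file you can open in GIMP:

```python
def render(w=240, h=160, cell=16):
    # Raw RGB bytes: a grayscale gradient, with every cell-th row and
    # column replaced by alternating saturated red/green grid lines.
    buf = bytearray()
    for y in range(h):
        for x in range(w):
            g = 255 * x // (w - 1)  # plain left-to-right gradient
            if x % cell == 0 or y % cell == 0:
                on_red = (x // cell + y // cell) % 2
                buf += bytes((255, 0, 0)) if on_red else bytes((0, 255, 0))
            else:
                buf += bytes((g, g, g))
    return bytes(buf)

def save_ppm(path, w=240, h=160, cell=16):
    # Binary PPM (P6): header, then raw RGB triplets.
    with open(path, "wb") as f:
        f.write(b"P6\n%d %d\n255\n" % (w, h))
        f.write(render(w, h, cell))

# save_ppm("contrast.ppm")  # uncomment to write the test image
```

Every pixel between the grid lines is strictly neutral gray (r = g = b); any color you see inside the cells is supplied entirely by your visual system.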
The processing needed to create the above image happened along with unrelated but significant code improvements in the last couple of weeks. I have been visiting mitch – a prolific GIMP contributor – for collaboration, and lots of progress has been – and still is – being made on babl, GEGL and GIMP.