For over two years now, my local supermarket has offered an electronic receipt for each purchase.
With a "little bit of data sharing" signed off, a PDF of the receipt you would have gotten at the register lands in your email inbox just a couple of seconds after you have paid.
That receipt is a fairly small PDF file that looks just like the paper receipt would have. Additionally, you can opt out of the paper receipt, which means less wasted paper as it is not even printed at the register.
I had accumulated two years of grocery shopping – over 100 receipts – until I finally sat down and wrote a parser that takes a PDF receipt, parses it, and publishes the results to corresponding MQTT topics.
When you start the program, it goes through all PDF files in the directory you point it at. If it finds REWE eBons, it reads and parses them.
It then orders the eBons by date and publishes all of them in correct chronological order to MQTT.
Afterwards it starts watching the directory for changes and new files. It picks those files up automatically, reads them in, and sends the data to MQTT if the receipt date is newer than the last one seen and sent.
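The parse-sort-publish core of that loop can be sketched in a few lines of Python. Note that the eBon line format, the topic layout, and the publish callback below are my illustrative assumptions, not the actual tool's code:

```python
from datetime import datetime

def parse_ebon(text):
    """Parse one (simplified, hypothetical) eBon text into a dict.

    Assumes item lines like 'BANANEN 1,99' and a date line 'Datum: 12.03.2021'.
    The real REWE eBon layout differs - this only illustrates the idea.
    """
    items, date = [], None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Datum:"):
            date = datetime.strptime(line.split(":", 1)[1].strip(), "%d.%m.%Y")
        elif line:
            name, price = line.rsplit(" ", 1)
            items.append((name, float(price.replace(",", "."))))
    return {"date": date, "items": items}

def publish_receipts(receipts, publish):
    """Publish all receipts in chronological order, one topic per item."""
    for receipt in sorted(receipts, key=lambda r: r["date"]):
        for name, price in receipt["items"]:
            publish(f"ebon/{name.lower()}/price", price)
```

In the real tool, `publish` would be an MQTT client call; here it is just a callback so the ordering logic stands on its own.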
I've wrapped all of it up with a Dockerfile, so it can be run anywhere you have Docker up and running.
Now what do I do with this you may ask?
Let me show you an example:
How I use this: the tool runs all the time, watching a directory. Whenever a new PDF file shows up there, it is automatically parsed and its contents pushed out through MQTT – each item essentially in its own separate topic, with price, quantity, and so on.
From there I use a combination of Telegraf (to get the data from MQTT into InfluxDB), InfluxDB (to store the time series), and Grafana (to query the data and draw graphs).
This way it is trivial to plot the price development of groceries you purchase regularly – and easy to spot changes you might otherwise have missed.
I am still drilling into the data, and there is a lot more you can do with it.
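For reference, the Telegraf side of that pipeline can be as small as a single `mqtt_consumer` input section. The broker address and topic pattern below are placeholders, not my actual configuration:

```toml
[[inputs.mqtt_consumer]]
  servers = ["tcp://broker.local:1883"]   # hypothetical broker address
  topics = ["ebon/#"]                     # subscribe to all receipt topics
  data_format = "value"                   # each message is a bare numeric value
  data_type = "float"
```

Telegraf then forwards every incoming price to its configured InfluxDB output, and Grafana queries InfluxDB from there.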
Ever since I stumbled across several IRL streamers, I have been intrigued by the concept.
IRL or "in real life" streaming is essentially the art of streaming everyday life – for hours, and fully mobile. Of course there are some great gems in the vast sea of content creators. One of them – robcdee – streams live for hours almost every day, showing you his way around Japan.
Apart from the content – Japan is great – the technical side of these IRL streaming set-ups is quite interesting. Imagine: these streamers wander around with a backpack filled with batteries and several modems (4G/5G…) that load-balance and bond a 2-6 Mbit/s video+audio stream, which is sent to a central server through either the SRT or RTMP protocol. This central server runs OBS Studio and receives the video stream, offering the ability to add overlays and even switch between different scenes and contents.
After I had a basic understanding of the underlying technologies, I went ahead and started building my own set-up. I have plenty of machines with enough internet bandwidth available to host OBS Studio. I wanted all of this to live in a nice Docker container.
I built a Docker container based on the latest Ubuntu 21.04 image that sets up a very minimal desktop environment accessible over VNC. In this environment, OBS Studio runs, waiting for the live stream to arrive and then sending it out to Twitch or YouTube.
How I have set-up this docker desktop environment exactly will be part of another blog article.
So far so good. OBS offers the ability to define multiple scenes to switch between during a live stream.
These IRL streamers usually have one scene for when they are starting their stream and two more scenes for when they are having a solid connection from their camera/mobile setup and when they are currently experiencing connection issues.
Seemingly all of these streamers use the same tool to automatically switch between scenes depending on connectivity state. Unfortunately, that tool is only available for Windows – not for Linux or macOS.
So I thought I would give it a shot and write a platform-independent one. Nothing wrong with understanding a bit more about the technicalities of live streaming, right?
It runs on Linux, Windows, and macOS, as I used the .NET 6.0 framework to create it. It is all open source and essentially just a bit of glue and logic around another open source tool called "netproxy" and OBS WebSocket.net.
I run it inside the Docker container alongside OBS Studio. It essentially proxies all incoming stream data to OBS and monitors whether the connection is established or currently disconnected. Depending on that state, it can be configured to switch between a "connected" and a "disconnected" scene in OBS, fully automatically.
So when you are out and about live streaming your day this little tool takes care of controlling OBS Studio for you.
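Conceptually, the switching logic is a small state machine: while stream bytes keep flowing, show the "connected" scene; when they stop for a few seconds, fall back to "disconnected". Here is a Python sketch of that logic – my illustration of the approach, not the actual .NET tool, which talks to OBS through its WebSocket API:

```python
import time

class SceneSwitcher:
    """Tracks stream-data activity and decides which OBS scene should be live.

    set_scene is a callback (in the real tool, an OBS WebSocket call).
    """
    def __init__(self, set_scene, timeout=3.0):
        self.set_scene = set_scene
        self.timeout = timeout          # seconds without data = disconnected
        self.last_data = None
        self.current = None

    def on_data(self, now=None):
        """Call whenever stream bytes pass through the proxy."""
        self.last_data = now if now is not None else time.monotonic()
        self._switch("connected")

    def tick(self, now=None):
        """Call periodically; falls back to 'disconnected' on timeout."""
        now = now if now is not None else time.monotonic()
        if self.last_data is None or now - self.last_data > self.timeout:
            self._switch("disconnected")

    def _switch(self, scene):
        if scene != self.current:       # only act on actual state changes
            self.current = scene
            self.set_scene(scene)
```

The `now` parameters exist only to make the timing logic easy to exercise; in normal use the wall clock is taken automatically.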
Today is the day FeM e.V. celebrates its 25th birthday.
The Forschungsgemeinschaft elektronische Medien e. V. (FeM) is one of the largest student associations at any university in Thuringia. It was founded in 1997 around TU Ilmenau. It currently has about 2,000 members and operates the largest self-administered student network in Thuringia. Through various streaming projects, the association has become known beyond Thuringia as well.
Right from the start I set up my stream so that the current player counter was always visible on screen. I simply find that a fun piece of information, especially for those interested in LuckyV.
My original implementation was somewhat complicated – too complicated to simply share with others.
So I decided to package the counter into its own Windows application that streamers can easily use and integrate into OBS.
Start it and check that the number is displayed – it should look roughly like this:
There are now two ways to integrate it.
Option 1: window capture
In the application you can configure the background color as well as the font and its color. Once everything is set up the way you want it, choose "Window Capture" in the sources menu and select the application window.
You can then configure this source however you like, e.g. with filters to make everything except the text transparent, and so on…
Option 2: playercount.txt
While the application is running, it constantly updates a file called "playercount.txt" in the same folder. You can configure OBS to read this file regularly and display its contents.
To do so, add a "Text (GDI+)" source in the sources menu and configure it to read its text from a file:
Here you can also configure the font, size, and color however you like.
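The idea behind the app fits in a few lines of Python. The endpoint URL and JSON shape below are made-up placeholders, not the real LuckyV API:

```python
import json
import urllib.request

# Hypothetical endpoint and JSON shape - the real API differs and may change.
API_URL = "https://example.com/playercount.json"

def fetch_playercount(url=API_URL):
    """Fetch the current player count from a JSON endpoint."""
    with urllib.request.urlopen(url) as resp:
        return int(json.load(resp)["players"])

def write_playercount(count, path="playercount.txt"):
    """Write the count to the text file that OBS re-reads periodically."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(str(count))
```

The real application does this in a loop every few seconds; the "Text (GDI+)" source then picks up each change from the file.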
With the release of the M1 iPad Pro, I decided to order one to replace my aging iPad mini 4 from 2015.
And so far I am very happy with it. I got it with the Apple Smart Folio, which comes with this strange material that seems to collect dust like crazy. At least it seems to do its job of protecting the device.
The Smart Folio allows you to prop the iPad either fully upright or lay it down on a table at an angle.
Embedded into the Smart Folio are magnets on both sides; it relies solely on magnetic force to attach to the iPad Pro.
I am now regularly typing and using a mouse with iPadOS, which works great with the normal Bluetooth keyboard and mouse from Logitech that I still had around. But those desktop peripherals are quite heavy and big compared to what you would want with a mobile device like the iPad Pro.
There are multiple options that replace the Smart Folio with keyboard and touchpad combos – from Apple, from Logitech, and of course the usual suspects from China. Those combos all have different downsides for me. For example:
Apple Magic Keyboard
Logitech Folio Touch
does not attach magnetically but puts the iPad into a bumper frame
As thick as it gets
Kind-of pricey for keyboard and touchpad
Both of the above options require the iPad Pro to be permanently attached to the case/keyboard. This limits the angles and the distance at which I can use the iPad, how I can use the keyboard, and in what positions I can type. Both of them also connect directly to the iPad Pro through the back connectors for power and data transfer.
None of this works for me. I want a keyboard+touchpad that I can put at whatever distance I want in front of the screen, and put the screen wherever I need it, to work comfortably. Everything being too tightly integrated and having to be wired up to even work is a big downside for me.
So I started to look around and quickly found lots of keyboard/touchpad combinations that are self-powered and have actually been available for years.
With some research, I settled on purchasing one that ticked all the boxes for me:
The haptic feel when typing should be bearable – ideally like a good notebook keyboard
The touchpad should support multi-touch gestures and work well with iPadOS – which, it seems, is a really hard thing to achieve
Bluetooth 5.0 connection that does not interfere with WiFi
very light, yet with enough battery for hours of use
needs to attach somehow to the iPad case while not in use, yet be physically detached from the iPad while in use
needs to support all the normal keys you would need on a Linux console or while programming, including the F-keys
This is how it looks while in use:
As you can see, it is not actually attached to the iPad but just sitting there, ready to be used. It is a fair size – remember: that is a 12.9-inch iPad next to it.
All the above boxes are ticked: the keyboard feels good while typing, it has F-keys, and it even offers switchable layouts for different use cases. All my programming and console needs are fulfilled.
It is insanely light – it almost feels too light. But the backside is thin metal, which is magnetic. And yes: it simply attaches to the outside of the original Apple Smart Folio that I already had. It literally snaps onto it and stays there while being moved from one place to another.
With the flexibility of the original Smart Folio, I can now put the iPad onto the coffee table and sit comfortably on the couch, typing and using the touchpad with the stable little keyboard on my lap.
Since it comes with its own battery (I have had it for a week and was unable to drain it), it is a bonus that charging happens through a USB-C port. Most other cheap keyboard/touchpad combinations come with a Micro-USB port for charging – even in 2021.
I could not resist opening it right up. There are 8 screws at the bottom that can easily be removed.
Look how easy it will be to replace the battery one day: this is a basic off-the-shelf battery pack that is cheap to replace when it fails.
Now, while I can recommend this keyboard for the iPad Pro, I cannot tell you where to get it. I ordered mine on Amazon, but while writing this article I was unable to find and link the product page – it apparently got removed.
So my only recommendation is: go hunting for keyboards with similar specs. Mine also has key backlighting in different colors – which nobody needs for any reason. If you do go hunting, look for keyboard/touchpad combinations that offer Bluetooth 5.0 and USB-C charging, and compare the pictures, as the keyboard layout was quite unique (T-cursor keys, F-keys, …).
The only meter in our house that I was not yet able to read out automatically was the water meter.
With the help of a great open source project by the name of AI-on-the-edge and an ESP32 camera module, it is quite simple to regularly take a picture of the meter, convert it into a digital reading, and send it off through MQTT.
The process is quite simple and straightforward.
Flash the ready-made firmware image to the module
Configure the WiFi using an SD card
Put the module directly over the meter
Connect to it and set up the reference points and the meter recognition marks
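The WiFi step above boils down to placing a small wlan.ini in the root of the SD card. The exact key names may differ between firmware versions, so treat this as a sketch with placeholder credentials:

```ini
; wlan.ini - WiFi credentials for the ESP32 module
ssid = "my-home-wifi"
password = "my-wifi-password"
```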
As you can see above, all the recognition is done on the ESP32 module itself, with its 4 MByte of RAM.
With the data sent through MQTT it’s easy to draw nice graphs:
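Since the meter value published over MQTT is a cumulative reading, graphs of actual consumption are typically built from the differences between consecutive values. Here is the idea in plain Python, with made-up sample readings:

```python
def consumption(readings):
    """Turn cumulative meter readings into per-interval usage.

    readings: list of (timestamp, cubic_meters) tuples, oldest first.
    Returns a list of (timestamp, delta) for each interval.
    """
    deltas = []
    # pair each reading with its predecessor
    for (_, prev), (ts, cur) in zip(readings, readings[1:]):
        deltas.append((ts, round(cur - prev, 3)))
    return deltas
```

In my setup this differencing happens in the Grafana/InfluxDB query rather than in code, but the math is the same.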
It took me a couple of months to get all the required parts ordered and delivered – many small envelopes with parts that seemingly are only produced by a handful of manufacturers. But anyway: after everything had arrived and was checked for completeness, my wife took the hardware into her hands and started soldering and assembling the keyboard.
And so this project naturally split between my wife and me in the most natural (to us) way: my wife did all the hardware work, whilst I did the software and interfacing portion. (Admittedly, all that was left to figure out was how to get the firmware compiled and altered to my specific needs.)
Conveniently, QMK comes with its own build tools, so you will be up and running in no time. Since I had purchased Arduino ProMicro controllers, I was good with the most basic setup you can imagine. As the base requirements for the toolchain were minimal, I went with the machine I had in front of me – a Raspberry Pi 4 with the standard Raspberry Pi OS.
These were the steps to get going:
get Python 3 and the qmk tool installed – I’ve chosen not to use the tool setup procedure but instead go with a separate clone of the QMK firmware repository.
python3 -m pip install --user qmk
clone the QMK firmware repository and get the QMK tool running (in the /bin folder of the firmware repository – it’s actually just a python script)
git clone https://github.com/qmk/qmk_firmware.git
git submodule sync --recursive
git submodule update --init --recursive --progress
create your own keymap to work with. You gotta use the crkbd firmware options as a default for this keyboard. The command below will generate a subfolder with the name of your keymap in the keyboards/crkbd/keymaps folder with the default settings of the crkbd keyboard firmware.
qmk new-keymap -kb crkbd
build your first firmware by running the command below (note: btk-corne is the name of my keymap; the keyboard target matches the crkbd/rev1/legacy firmware this build uses)
qmk compile -kb crkbd/rev1/legacy -km btk-corne
now you can flash the firmware to both ProMicro controllers. The most straightforward way for me was using avrdude on the command line. In my case the device shows up as /dev/ttyACM0 and the compiled firmware is named crkbd_rev1_legacy_btk-corne.hex.
Once you have all this information, plug in the ProMicro and trigger a reset by bridging Ground and the Reset pin. If, like we did, you added a reset button, you can use that. After hitting reset, the ProMicro bootloader enters a state where it can be flashed. Reset it and THEN run the avrdude command line – for the ATmega32U4-based ProMicro with its Caterina bootloader that is:
avrdude -p atmega32u4 -c avr109 -P /dev/ttyACM0 -U flash:w:crkbd_rev1_legacy_btk-corne.hex
(alternatively) you can also use QMK Toolbox to flash the firmware. Also works.
So now you know how to get the firmware compiled and running (if not, take another look here). But most probably you are not happy with some aspects of your keymap or firmware.
By now you might ask yourself: Hey, I’ve got two ProMicros on one keyboard. Both are flashed with the same firmware. Into which of the two do I plug in the USB cable that then is plugged into the computer?
The answer is: by default QMK assumes that you are plugging into the left half of the keyboard making the left half the master. If you prefer to use the right half you can change this behaviour in the config.h file in the firmware:
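In QMK this is a one-line change in the keymap's config.h; MASTER_RIGHT and EE_HANDS are the standard QMK options for this:

```c
// keyboards/crkbd/keymaps/btk-corne/config.h
#define MASTER_RIGHT   // treat the right half as master instead of the left
// alternatively, store each half's handedness in EEPROM:
// #define EE_HANDS
```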
You have to plug in both of them at times anyway, whenever you flash a new firmware while adjusting and changing your keymap.
Thankfully, QMK comes with loads of options and even a very useful configurator tool. I used this tool to adjust the keymap to my requirements. The process there is straightforward again: open the configurator and select the correct keyboard type – in my case that is crkbd/legacy. The basic difference between legacy and common is a different communication protocol between the two halves. This really only matters when features are used that require some sort of sync between the two halves – like some RGB LED effects. Since I did not add any LEDs to the build, I went with legacy for now. Maybe I will need features later that require me to go with common.
The configurator allows you to set up the whole keymap and upload/download it as a .json file.
That .json file can easily be converted into the C code that you need to alter in the actual keymap.c file. Assuming that the .json file you got is named btk-corne.json the full commandline is:
qmk json2c btk-corne.json
Then simply take this output and replace the stuff in the keymap.c with it:
Now you compile and flash again. And if all went right you’ve got the new keymap and firmware on your keyboard and it’ll work just like that :)
Disclaimer: I’ve joined for fun and not for profit – this is a new hobby.
For about a year now, I have regularly been watching some Twitch streamers go about their business, and it sparked my curiosity when some of them started doing something they called "GTA V roleplay".
Grand Theft Auto V (GTA V) is a 2013 action-adventure game developed by Rockstar North and published by Rockstar Games. Set within the fictional state of San Andreas, based on Southern California, the open world design lets players freely roam San Andreas’ open countryside and the fictional city of Los Santos, based on Los Angeles. The game is played from either a third-person or first-person perspective, and its world is navigated on foot and by vehicle.
So these streamers were mostly using an alternative client application to log into GTA V online servers operated by independent teams, playing the roles of characters they created themselves. It gets really interesting when dynamics and interactions unfold between those characters, and whole stories develop over the course of days and weeks.
It's great fun watching, having the opportunity to sometimes see multiple perspectives (by multiple streamers) of the same story, and eventually even being able to interact with the streamers' communities.
One fairly big German server is LuckyV. It's an alternative GTA V hardcore role-play server created by players, for players.
"Hardcore" here means: the characters are supposed to act as much as possible like they would in real life in the situations they encounter.
So in order to play on this server, you have to create a character and that character's background story. You really have to play that character when on the server.
When you play it’s not just a vanilla GTA V experience. There are lots of features that are specific to the server you are playing on. Some examples are:
Communication: you are communicating with people in your vicinity directly – you can hear them if they are close enough to be heard and you can be heard when you are close to people
Jobs: there’s lots to be done. Become CEO of your own company and manage it!
Social interaction: there is probably an event happening just around the next corner. You can meet people – crowds of people, even. Remember: there are usually no non-player characters. Every person you see is a real human you can interact with.
The LuckyV community made a great overview page where you can watch other people playing and live streaming their journey. It’s extensive – over 200 streamers are online regularly and the screenshot below shows a mid-week day right after lunch…
Anyhow. This is all great and fun but plot twist: I do not play it. (yet)
So what do I have to do with it, other than watching streamers? Easy: behind the game there's code. Lots of code, actually.
In a nutshell, there's a custom GTA V server implementation that talks to a custom GTA V client. LuckyV uses the altV server and client to expand the functionality and bring the players into the world.
It allows for 1000 simultaneous players in the same world at a time – so there could be 1000 people right there with you. Actually, since LuckyV is about to celebrate its first birthday, regular player numbers are peaking at around 450 simultaneous players in Los Santos at a time.
The whole set-up consists of several services all put together:
web pages for game overlays, in-game UI and administration tools (PHP)
a SQL database that holds the item, character etc. data
a pub/sub style message hub that enables communication between in-game UI, webpages and the gamemode
a TeamSpeak 3 server that allows players to join a common channel (essentially one teamspeak room) and a plug-in called SaltyChat that mutes/unmutes players in the vicinity and allows features like in-game mobile phone etc.
everything above runs in containers and is easily deployable anywhere you have enough hardware – when there are hundreds of players online, the load of the machine grows almost linearly, and the machine is worth its money then…
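The message hub in that list follows the classic topic-based publish/subscribe pattern. A toy Python version – an illustration of the pattern, not LuckyV's actual implementation – looks like this:

```python
from collections import defaultdict

class MessageHub:
    """Minimal topic-based pub/sub, as used conceptually between
    the in-game UI, web pages, and the gamemode."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback for messages on a topic."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to every subscriber of its topic."""
        for callback in self.subscribers[topic]:
            callback(message)
```

The production hub is a networked service, of course, but the decoupling is the same: publishers never know who is listening.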
So after the team announced some vacancies through the streamers I watched, I contacted them and asked if I could help out.
And that’s how I got there working on both the gamemode code as well as helping the infrastructure become more stable and resilient.
For my first real contribution to the gamemode I was asked to implement secondary keys for vehicles as well as apartments/houses.
Up until now, only the owner/tenant of a vehicle or apartment had access to it. Since this game is about social interactions, it would be a good addition if the owner could hand out additional keys to those they love or interact with.
And that I did. I worked my way through the existing code base – which is a "grown" codebase – and after about 3 days of work, it worked!
Most impressive for me are the team and the people I've met there. The current team welcomed me warmly and helped me wrap my head around the patterns in the code. Given the enthusiast/hobby character this all has, it's almost frightening how professionally and nicely everything works out. I mean, we developers had a demo session with the game design team to show off what our feature does and how it works, and to let them try it out to see if it's like they envisioned it.
They even made a trailer for the feature I worked on! And it is as cheesy as I could have wished for:
So far so good: it's great fun and really rewarding to work with all these nice people to bring even more fun and joy to the players. Seeing the player numbers grow; seeing streamers actually use the feature and play with it – handing over keys to their partner. Really rewarding.
For the first time in the last ten-ish years, I am back playing a game that really impresses me. The story, the world, and the technology of Cyberpunk 2077 really are a step forward.
It's a first in many aspects for me. I do not own a PC capable of playing Cyberpunk 2077 at any quality level; usually I play games on consoles like the PlayStation. But for this one I chose to play on the PC platform. But how?
I am using game streaming. The game is rendered in a datacenter on a PC and graphics card I am renting for the purpose of playing the game. And it simply works great!
So I am playing a next-generation open-world game, with technical breakthroughs like ray tracing producing really great graphics, streamed over the internet to my big-screen TV, with my keyboard+mouse forwarded to that datacenter – without (for me) noticeable lag or quality issues.
The only downside I can see so far is that so many people like to play this way that there are not enough machines (gaming rigs) available for all the players who want one – so there's a queue in the evening.
But I am doing what I always do when I play games: I take screenshots. And if the graphics are great, I even try to create panoramic views of the in-game graphics. Remember my GTA V and BioShock Infinite pictures?
So here is the first batch of pictures – some stitched together from 16 or more single screenshots. Look at the detail! Again – these are in-game screenshots. Click on them to make them bigger – and right-click to open the source to really zoom in.
This time we look into the question of how much storage a 5-meter PNG file needs – the one Daniel built for his DIY arcade machine – marvel at LED lights that are supposed to look like a real sky, and enjoy the "Digitaler Alltag als Experiment".
I like playing arcade games. There was an "arcade" in my home town, and I used to go there after school quite frequently. It was a small place – maybe 5 machines and some pinball machines.
In February this year it occurred to me that, with the power of the Raspberry Pi and a distribution called RetroPie, I could build something that would bring back those games and allow me to play/try the games I never could, because my arcade was so small back in the day.
Starting from the basic plans, I began drawing in Inkscape and told my father about the plan. He was immediately in – the plan now being to build not one but two bartop arcade machines. He would take on the woodwork and I would do the rest – procurement, electronics, wiring, design, and "painting".
While I took the Holbrook Tech schematics as a base, it quickly became apparent that I had to build/measure around the one fixed big thing in the middle: the screen.
I wanted something decently sized that the Raspberry Pi would be able to drive and that would require no maintenance/further action once installed.
To find something that fit, I fixed my requirements:
between 24″ and 32″
a wide viewing angle free of color shift
takes audio over HDMI and can push it out through the headphone jack
I eventually settled on a BenQ GW2780 27″ monitor, with all boxes ticked, for a reasonable price.
After the monitor arrived, I carried it to my father's house and we started by cutting the bezel as a first try.
After some testing with plywood, we went with MDF, as proposed by others on the internet. This made the cutting much easier.
We went with standard 2 cm MDF sheets, and my father cut them to size using the measurements derived from the monitor bezel centerpiece.
Big thanks to my father for cutting so much wood so diligently! The next days he sent me pictures of what he’d made:
The side panels got a cut around for the black T-Molding to be added later.
electronics and wiring
After about 2 weeks my father had built the first arcade out of sheets of MDF and I had taken delivery of the remaining pieces of hardware I had ordered after making a long list.
It contains 2 standard 4/8-way switchable arcade joysticks, 10 buttons, all microswitches required and the Ultimarc I-PAC-2 joystick encoder.
So when I got the first arcade from my father, I immediately started to put in the electronics.
The sound was a bit more complicated. I wanted a volume control knob on the outside but also did not want to disassemble any audio amplifier.
I went with the simplest solution: a 500 kOhm dual potentiometer soldered into the headphone extension cable going to the amplifier. The potentiometer was then put into a pot, with a hole letting it stick out so that a knob could be attached.
The Raspberry Pi set-up then only lacked cooling. The plan was to have a 120 mm case fan pull in air from the bottom and vent it out through another 120 mm hole at the upper back. Additionally, the Raspberry Pi would get its own small 30 mm fan on top of its heatsink case.
I attached both fans directly to the Raspberry Pi – saving myself another power supply.
Now I had to make it all work together. As I wanted to use RetroPie in the newest 4.6 release, I set that up and hooked everything up.
On first start-up, EmulationStation asked me to configure the inputs. It detected two gamepads, as I had put the I-PAC 2 into gamepad mode beforehand. You can do this with a simple mode-switch key combination that you need to hold for 10 seconds.
The configuration of the buttons for the two players went without any issue. First I set up the player 1 inputs, then I re-ran the input configuration for the player 2 inputs.
The controls were straightforward. I mainly wanted 4-way games, but with enough buttons to switch to some beat 'em ups at will.
So I configured a simple layout in RetroArch, with some additional hotkeys added:
I tossed around several design ideas, obviously derived from the games I wanted to play and looked forward to.
There were some Metal Slug or Cave-shooter-related designs I thought of. But then my wife had the best idea of them all: Bubble Bobble!
So I went looking for Bubble Bobble inspiration and found some, but none that stuck.
One was a good inspiration, though, and I based my design on it – just with a more intense purple color scheme.
I used Inkscape to pull in bitmap graphics from Bubble Bobble and to vectorize them one by one, eventually ending up with a lot of layers of nice scalable vector graphics.
With the design set, I sliced it up and found a company that would print it on vinyl.
With the final arcade wood accessible to me, I could take actual measurements and add 4 cm of margin to each element. This way, putting it on would hopefully be easier (it was!).
Originally I wanted to have it printed on a single 4 m by 1.2 m sheet of vinyl. It all would have fit there.
But I had to find out that Inkscape was not able to export pixel data at this size and a pixel density of 600 dpi. It simply was too large for it to output.
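A quick back-of-the-envelope calculation (mine, not from Inkscape's error message) shows why it choked: at 600 dpi, a 4 m by 1.2 m canvas lands in the gigapixel range.

```python
DPI = 600
INCH = 0.0254                         # meters per inch

width_px = round(4.0 / INCH * DPI)    # the 4 m edge
height_px = round(1.2 / INCH * DPI)   # the 1.2 m edge
pixels = width_px * height_px
gigabytes = pixels * 3 / 1e9          # uncompressed RGB, 3 bytes per pixel

print(width_px, height_px, round(pixels / 1e9, 2), round(gigabytes, 1))
# prints: 94488 28346 2.68 8.0
```

Roughly 2.7 gigapixels, or about 8 GB of raw RGB data – no wonder a single export was too much.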
So I eventually had to cut everything down into 5 pieces of 1.2 m by 80 cm each.
After about 7 days, everything arrived at my house, printed on vinyl. I immediately laid it all out and checked whether it would fit. It did!
Now everything had to go onto the wood. Before ordering, I had done a test run to check whether it would stick securely to the wood – it stuck very nicely. Putting it on was some intense fiddling, but it eventually worked out really, really well.
Now it was time for some acrylic. I wanted a good bezel and cover for the monitor, as well as for the handrest and the front buttons.
Cutting acrylic myself was out of the question, so I went with a local company that would laser-cut the acrylic to my specifications.
I sent them the schematics, the measurements, and the panels for reference, and 4 days later the acrylic arrived. We could then put the last bits together for completion!
I am really happy with how this turned out – especially since I am a hopeless case with everything that requires actual manual work. With this, somehow everything worked out.
I still entertain the idea of a vertical shoot-em-up-centered version… but maybe some day.
We got together again – this time with our guest Philipp from nerdbude.com – and talked about keyboards, the GitHub Arctic Vault, OCRmyPDF, and a self-built arcade machine.
As with the last episode 23, we recorded a video track in addition to the audio track – though not as a "talking heads" episode; instead, while we talk about the topics, we try to back them up with additional content – links and pictures.
So this is interesting: normally, a Windows executable, if you try to run it anywhere else, will show a message that it "cannot be run here" and terminate.
Printing this message is actually done by a little stub program whose only task is to print that very message. So it can be overwritten.
Michal Strehovský did exactly this, very impressively. He documented what he did to get the game "snake", written in C#, running on DOS instead of the "does not run here" stub – in an executable file that runs both on standard 90s MS-DOS and on Windows with the .NET Framework installed.
He used a quite elaborate toolchain – namely DOS64-stub.
You can read all of this in the full thread. I recommend a deeper dive, as it’s a great start to better understand the inner workings of your computer…
This is so much nicer! Of course, it has to be taken with a grain of salt: there are several "jokes" hidden in the names and lines. Don't take this as an actual reference – rather, go by the official ones.