I am currently contemplating the development of a mobile application that allows users to discover and collect various Japanese cultural stamps, such as 駅スタンプ (eki stamps), 御朱印 (goshuin), and 鉄印 (tetsuin). Additionally, this app will enable users to share their collections. My plan is to utilize OpenStreetMap data and provide functionality for users to contribute new stamp locations to the OSM database directly from the app. I have prepared a comprehensive “vision-readme” document that outlines the initial version of the application, detailing various aspects like functionalities, design considerations, and target audience.
I am seeking support as I currently lack expertise in adding structured data to OSM. My experience with OSM data and app development includes hosting my own Overpass server with a full global dataset. This server supports two iOS mobile applications I developed: (1) miataru and (2) Toilets around me.
I am in the research and conceptualization phase and am looking for collaborators interested in contributing to the concept, implementation, and operation of this project.
You can find more details on the vision and concept here:
Overview
EkiStamp Quest is an engaging mobile application designed for travelers in Japan. It’s a perfect companion for those who enjoy collecting unique Eki Stamps from train stations and tourist spots across the country. The app also supports the collection of Goshuin and Tetsuin, catering to a wide range of cultural enthusiasts.
Goshuin are traditional seals collected at temples and shrines, symbolizing a visit and prayer. Tetsuin are railway station-specific stamps, often celebrating historic or scenic railway lines. EkiStamp Quest offers a fun and interactive way to explore and appreciate Japan’s cultural landmarks, including temples, shrines, and railway stations.
Features
Stamp Locator: Utilize your location to discover nearby tourist spots, train stations, temples, and shrines with Eki Stamps, Goshuin, and Tetsuin.
Interactive Map: Navigate through different regions and find locations offering these cultural stamps and seals.
Collection Tracker: Keep track of the stamps and seals you’ve collected and the locations you’ve visited.
Stamp and Seal Information: Access detailed information about each stamp and seal, including their design, station history, and cultural insights.
Community Sharing: Share your collection with others and explore collections from various users.
Rewards and Challenges: Engage in challenges such as stamp rallies and historic railway journeys to collect special stamps and earn rewards.
In-App Cropping Tool: Save and personalize your stamp collection with a cropping tool, allowing for cut-out versions of stamps.
Customizable Collection Books: Choose from various designs to display your stamp collection in a style that suits you.
Social Media Integration: Easily share your stamps, overlaid on personal photos, on social networking sites.
Stamp Rally Participation: Join stamp rallies organized by different locations or operators, adding an exciting dimension to your collection experience.
EkiStamp Quest enriches the cultural experience of its users, enabling them to delve into and appreciate the diverse aspects of Japanese heritage through the collection of unique stamps and seals from various locations. This app transforms the traditional hobby of stamp collecting into an interactive and memorable journey through Japan’s rich cultural landscape.
Contact
To ask any questions or offer help, please contact me through the comment function of this blog or by email: bietiekay -at- gmail.com
I have really waited this one out. Some “galaxy-sized brains” have been telling us for decades now that virtual reality is the next big thing. And it might well be.
Almost nobody (me included) cared to even try – and with good reason: there’s no easy way to transport the experience that virtual reality creates. Language and “flat-screen video” are not enough. Even 3D video does not come close to delivering it.
And I knew this was the case. Apart from a 30-second rollercoaster ride years ago I never had any direct contact with virtual reality technology until late 2022.
I did of course read about the technology behind all this. About the rendering techniques and the display, sensor, battery and processing hardware. I had read about the many-frames-per-second requirements for a believable and enjoyable experience. If the hardware was not fit for the job, the papers said, you would feel sick, very fast.
So I hesitated for years to purchase anything related to this. I wanted to “wait it out”, as I had calculated that the average spending required for a good set of hardware and software would easily land in 2k–5k euro territory.
This year the time had come: prices were down significantly for all components needed. Even better: there were some new hardware releases that tried to compete with existing offerings.
Of course the obvious thing to do would have been to purchase either a Valve Index or some Oculus, eh, Meta VR headset. But that would have easily blown any budget, and actually none of these was technologically interesting at the end of 2022. My requirements were:
CPU+GPU inside – the headset needs to be able to work stand-alone for video playback and gameplay
battery for at least 1–2 hours of wireless play
touch+press controllers
capable of being used as SteamVR / PCVR headset – wireless and wired
Pancake lenses (as in “no fresnel”)
do-not-break-bank price
And what can I say. There was at least one VR headset released in December 2022 that fit my requirements: PicoXR’s PICO 4 headset.
Pico 4 VR headset
So I went ahead and purchased one – which was delivered promptly. It came with charger, USB-C cable, two controllers and the headset itself. The case I got in addition to carry it around and safely store it when not in use.
At first I tried only applications and games that run directly on the headset. Of course also some video streaming from YouTube and the like. There is VR/180/360 content readily available, with a huge caveat: I quickly learned that even 8K video does not have enough pixels when it’s supposed to fill 360 degrees around you. 8K is rather the minimum that starts to look good.
Then there are video formats. Oh god, are there formats. I could probably fill another blog article just with video formats for VR and 180/360-degree content. Keep in mind that you can add 3D to the equation as well. And if you want decent picture quality you easily see yourself pushing 60 frames of 8K (or more) times 2 (eyes) to the GPU of the little head-mounted displays. The displays can do 2160×2160 per eye, so you can imagine how much video you need to push before the displays are at their full potential. And consider: 2160 per eye is still not a pixel density at which you never see individual pixels. I do not see a screen-door effect and the displays are really, really good. But more pixels are… well, more.
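To put rough numbers on that, here is a back-of-the-envelope sketch in Python, assuming 7680×3840 for “8K” equirectangular video and the 2160×2160-per-eye panels:

# back-of-the-envelope: what does "8K at 60 fps for two eyes" actually mean?
video = 7680 * 3840 * 60            # one 8K equirectangular frame, 60 fps
panels = 2160 * 2160 * 2 * 60       # both 2160x2160 panels at 60 fps
print(f"8K video: {video / 1e9:.2f} gigapixels/s")    # ~1.77
print(f"panels:   {panels / 1e9:.2f} gigapixels/s")   # ~0.56
# ...but only a roughly 100-degree slice of the 360-degree sphere is visible
# at any moment, so each eye sees just a fraction of those 8K pixels - which
# is why 8K is merely the minimum that starts to look good.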
Anyways: there’s plenty of storage on the device itself, so on the next airplane trip I can look funny with the headset on while being immersed in a movie…
Or a remote desktop session:
After about a week of testing and playing (Red Matter 1 for example…) I was convinced that I’d like the technology and the experiences it offered.
The conclusion after the first week was as good as I could have hoped for with the first 500 euro investment done: I would not get sick moving around in VR. I would enjoy the things offered. I was convinced that I was able to experience things otherwise not possible.
And I was convinced that I could not have come to any conclusion without actually having owned such a headset and tried it myself. It’s just not possible to describe the feeling of walking into a 3-dimensional world that is rendered by a computer and fools your brain so well. Of course it’s NOT reality. That’s not the point. I do not feel like I’m on the holodeck. But it feels like computer games have become “3D touchable”. In virtual reality games there is a lot more going on than in non-VR games. And that’s the main reason there are not more good VR games: it’s hard to build an immersive, believable game world. It’s real effort, and I named Red Matter specifically because it was one of the most immersive, approachable and non-stressful puzzle games I have played.
Being convinced brought up the question: Now what?
Until this point there was no computer in our household that could even dream of powering a modern PCVR experience. But there was one Windows PC which I could use to do the due diligence for “what to buy” and whether it would work as I wanted at all.
What did I want?
a set-up that would allow me to play any modern PC VR game
play the games with at least high details and with framerates and resolutions that would not make me sick
no wired connection to the computer necessary
ideally the computer would not even be in the same room or floor
I had to do some testing first to figure out if the most basic requirements would work. So I purchased “Virtual Desktop” from the headset’s built-in store and installed the streamer app on the one Windows PC in the household, which had a very old dedicated GPU.
I did the immediate extreme test: the computer connected to the wired network in the house, the headset connected to the house WiFi shared with 80+ other devices. And it worked. It worked beautifully. Just out of the box, with my mediocre computer, I had the desktop screen of the computer floating in front of me. I was able to launch applications and even run simple 3D VR applications like Google Earth VR. I literally only had Steam and Virtual Desktop installed, clicked around, and got the earth in front of and below me in no time.
Apparently the headset was smart enough to connect to the 5 GHz WiFi offered in addition to the crowded 2.4 GHz one. Latencies and bandwidth all in good shape.
To make things a bit more foreseeable I’ve dedicated a mobile access point to the headset. My usual travel access point (GLinet OPAL) apparently works quite well for this purpose.
It’s connected to the house wired network and creates an access point just for the headset. The headset then has reliable 500+ Mbit/s access to any computer in the household.
After some more playing around and simulating some edge-case scenarios I came to the conclusion that this would work. I would not even have to touch a computer to do all this. It could all be done remotely over a fast-enough network connection.
After consulting with my knowledgeable brother-in-law I settled on a budget and had a computer built for the purpose of VR game streaming. After about 2k euros and 2 weeks of waiting I received the rig and did the most reasonable thing: put it in the server room in the house basement where it’s cool and, most importantly, far enough away from my ears.
Ryzen 5800x + 3070ti
So this one is closed up and sitting in the server room. The only thing other than power and ethernet that is plugged into the machine is an HDMI display emulator dongle:
The purpose of this HDMI plug without anything connected to it is to tell the graphics card that there is a display connected. It even tells the graphics card about all those funky resolutions that ghostly display can do… When there’s nothing connected to the HDMI ports, the only resolutions you can work with out of the box are the defaults up to 1080p. This device enables you to go beyond 2160p.
I did a bit of setting-up for wake-on-lan and some additional fall-back remote desktop services in case something fails.
To wake up the machine it’s sufficient to send the “magic packet” – either through the remote-play client’s built-in features (Moonlight can do it…) or through the house-internal dashboard:
yes, the computer is called “Valerian” – the machine I am mostly using to control it (other than the headset) is called “Laureline”…go figure!
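Sending such a magic packet by hand is simple enough; here is a small Python sketch (the MAC address is a placeholder):

import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    # A magic packet is 6 bytes of 0xFF followed by the MAC repeated 16 times.
    payload = bytes.fromhex("FF" * 6 + mac.replace(":", "").replace("-", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, (broadcast, port))

wake_on_lan("AA:BB:CC:DD:EE:FF")  # placeholder MAC of the gaming rig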
streaming games
For VR game streaming it’s as I had tested beforehand: Steam + Virtual Desktop doing their thing. Works, as expected, very pleasantly, even with high/ultra details set.
The machine can also be used to play normal non-VR games. For this I am using the open source Sunshine (server) / Moonlight (client) combination with great success.
I can just open up the Moonlight app on my iPad, iPhone, RaspberryPi or Mac, connect to the computer in the basement and use it at 60–120 fps and 1080p to 4K resolutions without even noticing that there is no computer under the desk…
Oh – I do notice that there’s no computer under the desk because of the absence of any noise while using it.
What I have found is really astonishing to me – I was not expecting such a well-integrated, working solution without having to solve problems first.
Virtual reality games just work. It’s install, start, works. The biggest issue I ran into was controllers not being correctly mapped for a game – easily solvable by remapping.
I “upped” the stakes a bit a couple of days ago when I installed OBS Studio to live stream my VR session of playing Red Matter 2 (the sequel…).
resized screenshot of one eye of an impressive scene of the game Red Matter 2. Just having landed on the Neptune moon Triton.
Nice hand tracking…
After installing OBS and setting up the “capture this screen” scene it was very nice to see that not only did OBS record the right displays (when set right), but out of the box it recorded the correct audio AND the correct microphone. Remember: I am playing in a specific room on the top floor of my house, using the awesome tracking of the headset for room-scale VR to the fullest.
The computer in the basement means that the only connection from headset to the computer is through Virtual Desktop – 5ghz WiFi – Ethernet – Virtual Desktop Streamer.
I did not expect a microphone to be there, but it is. I did not expect the microphone to work well. But it does. I did not expect the microphone to be seamlessly forwarded to the computer in the basement, with OBS effortlessly picking it up as a separate microphone for the Twitch stream. I was astounded. It-just-worked.
adding a (USB) gamepad
After a bit of fooling around, especially with standard PC games, I found that some games made me miss a gamepad. It was out of the question to connect a gamepad directly to the computer the games ran on – that one was in the basement and no USB cable is long enough.
I remembered playing with USB-over-IP in recent years just for fun, but also remembered never getting it to work properly. After investigating the hardware options I decided to give software another look.
Apparently a company called “VirtualHere” had seized its chance since the last time I played around with this. They offer server and client software that seemingly can run anywhere.
So I picked an old RaspberryPi 1 out of the drawer and flashed a fresh version of RaspberryPi OS. I booted it up and copied over the single Linux ARMv7 binary that VirtualHere offers. It started without issues or further dependencies.
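For reference, the whole Pi-side setup boils down to three commands – the download URL follows VirtualHere’s naming for their ARM build, so double-check it on their site:

wget https://www.virtualhere.com/sites/default/files/usbserver/vhusbdarm
chmod +x vhusbdarm
sudo ./vhusbdarm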
On the Windows machine you also only have to run a simple application, and it will scan the network for “VirtualHere USB hubs”.
It immediately showed the RaspberryPi as a USB hub. I plugged in my old Xbox 360 wireless receiver and it showed up and connected on Windows. When I then powered up an Xbox 360 wireless controller it made the well-known Windows “device plugged in” sound and I had a working gamepad ready to use in Windows – all over the network.
I cannot detect any added latency for the controller. And essentially anything I plug into the USB ports of the RaspberryPi can immediately be used/mounted on the computer in the basement, all over the already existing network.
It cannot be overstated how little hassle this solution was compared to any other way I know and would have tried. The open source USB/IP project is still around and seems to work on modern Windows, BUT you have to deal with driver signing and security issues yourself.
VirtualHere does cost money, but at least it’s not a subscription but a perpetual license you can purchase after trying out the fully functional one-device version. For me it now brings working USB-over-my-existing-network to any device I want around the house. There are some other uses I will look into – like that flatbed scanner I have. That camera that can now connect anywhere via USB… so many options…
conclusion
I went head-first into the virtual reality rabbit hole and it’s quite fun so far. The costs came down far enough, and I was able to learn a lot of things I otherwise could not have. Looking into the technology side of how all this comes together, and how latencies add up to build or ruin an experience, is remarkable.
If you want to get a (albeit clumsy and not 3D) look of what one of the many options to do in VR is – take a look at a VR session recording from two days ago:
Bonus: the GLinet OPAL travel router has 1 USB port. And you can run the VirtualHere USB hub software as a MIPSEL binary on there, so you would not need the RaspberryPi anymore. The only thing you must figure out yourself is how to route the traffic out of the right ports.
With a “little bit of data sharing” signed off, just about a couple of seconds after you pay for your purchase, a PDF file of the receipt you would have gotten at the cashier is in your email inbox.
That receipt is a fairly small PDF file looking just like the paper receipt would have. Additionally you can opt out of the paper receipt – which means less wasted paper, as it’s not even printed out at the cashier.
I had accumulated two years of grocery shopping – over 100 receipts – until I finally sat down and coded a parser that takes the PDF receipt, parses it and publishes the results to respective MQTT topics.
When you start the program, it goes through all PDF files in the directory you point it at. If it finds REWE eBons it reads and parses them in.
It then orders the eBons by date and outputs all of them in the correct order to MQTT.
Then it starts watching the directory for any changes and new files. It picks those files up automatically, reads them in and sends the data to MQTT if the receipt date is newer than the last one seen and sent.
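The actual parser is more involved, but the watch-and-publish core can be sketched in a few lines of Python – parse_ebon() is a hypothetical stand-in for the PDF parsing, and the broker address and topic layout are assumptions:

import json
from pathlib import Path

import paho.mqtt.client as mqtt            # pip install paho-mqtt (1.x API shown)
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer    # pip install watchdog

def parse_ebon(pdf_path: Path) -> dict:
    """Hypothetical stand-in for the real PDF parsing."""
    raise NotImplementedError

class EbonHandler(FileSystemEventHandler):
    def __init__(self, client):
        self.client = client
        self.last_seen = None               # date of the newest receipt already sent

    def on_created(self, event):
        path = Path(event.src_path)
        if path.suffix.lower() != ".pdf":
            return
        receipt = parse_ebon(path)
        if self.last_seen is None or receipt["date"] > self.last_seen:
            for item in receipt["items"]:
                # one topic per item, e.g. ebon/items/milk - topic layout is an assumption
                self.client.publish(f"ebon/items/{item['name']}", json.dumps(item))
            self.last_seen = receipt["date"]

client = mqtt.Client()
client.connect("localhost")                 # your MQTT broker
client.loop_start()
observer = Observer()
observer.schedule(EbonHandler(client), "/data/ebons")  # the watched directory
observer.start()
observer.join()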
I’ve wrapped all of it up so it comes with a Dockerfile and can be run anywhere you’ve got Docker up and running.
Now what do I do with this you may ask?
Let me show you an example:
cabbage, milk and pepsi prices plotted out… ignore the hour times – this is from a test import
How I use this: the tool is running all the time, watching a directory. Whenever a new .PDF file shows up in this directory it is automatically parsed and its contents pushed out through MQTT – each item essentially in its own separate topic with price, quantities etc.
From there, a combination of Telegraf (to get the data from MQTT into InfluxDB), InfluxDB (to store the time series) and Grafana (to query and show graphs) takes over.
This way it’s trivial to plot the price development of groceries you regularly purchase. It’s easy to see what you might have missed.
I am still drilling into the data and there are lots of things you can do with it.
After unzipping, right-click the folder and select “New Terminal at Folder”. If your menu does not show this item, just open a Terminal (search for Terminal) and navigate to the folder you unpacked the binary release to (“cd Downloads”).
Then mark irl-obs-switcher as executable by running “chmod +x irl-obs-switcher”. Then try to run it with ./irl-obs-switcher. On current macOS you might get a pop-up warning you about the file you are trying to run. This is a default warning, as the binary release of irl-obs-switcher is not signed/approved by Apple but just made available by the developer (me) to you. Choose “Cancel”, as you might not want to move it to the bin just yet.
Next we need to tell macOS to allow running the irl-obs-switcher file anyway by going to the “Security & Privacy” section of the System Settings.
You will see an “Allow Anyway” button that you can click to allow running irl-obs-switcher.
Now when you try to run irl-obs-switcher again the warning will look different. Click “Open” and you’re good to go.
Ever since I stumbled across several IRL streamers I have been intrigued by the concept.
IRL or “in-real-life” is essentially the art of streaming everyday life. For hours and totally mobile. Of course there are some great gems in the vast sea of content creators. One of them – robcdee – streams for hours live almost every day and shows you his way around in Japan.
Apart from the content – Japan is great – the technical side of these IRL streaming set-ups is quite interesting. Imagine: these streamers wander around with a backpack filled with batteries and several modems (4G/5G…) that load-balance and bundle a 2–6 Mbit/s video+audio stream, which gets sent to a central server through the SRT or RTMP protocol. This central server runs OBS Studio and receives the video stream, offering the ability to add overlays and even switch between different scenes and contents.
After I had a basic understanding of the underlying technologies I went ahead and started building my own set-up. I do have plenty of machines with enough internet bandwidth available that could be the host machine for OBS Studio. I wanted all of this to live in a nice Docker container.
I went ahead and built a docker container that is based upon the latest Ubuntu 21.04 image and basically sets up a very minimal desktop environment accessible over VNC. In this environment there is OBS Studio running and waiting for the live stream to arrive to then send out to Twitch or YouTube.
How I have set-up this docker desktop environment exactly will be part of another blog article.
look at this nice OBS Studio running on Linux inside a Docker Container on a root server on the other side of the country…
So far so good. OBS offers the ability to define multiple scenes to switch between during a live stream.
These IRL streamers usually have one scene for when they are starting their stream and two more scenes for when they are having a solid connection from their camera/mobile setup and when they are currently experiencing connection issues.
All of the streamers seemingly use the same tooling when it comes to automatically switching between the different scenes depending on their connectivity state. This tool unfortunately is only available for Windows – not for Linux or macOS.
So I thought I’d give it a shot and write a platform-independent one. Nothing wrong with understanding a bit more about the technicalities of live streaming, right?
It runs on Linux, Windows and macOS, as I have used .NET 6.0 to create it. It is all open source and essentially just a bit of glue and logic around another open source tool called “netproxy” and OBS WebSocket.net.
I run it inside the Docker container with OBS Studio. It essentially proxies all data to OBS and monitors whether the connection is established or currently disconnected. Furthermore it can be configured to switch scenes in OBS: depending on whether there is a working connection or not, it switches between a “connected” and a “disconnected” scene, all automatically.
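The tool itself is .NET, but the scene-switching core is easy to sketch; here is a rough Python equivalent using the obs-websocket-py library (scene names, port and password are whatever you configured, and the 3-second timeout is an assumption):

import time
from obswebsocket import obsws, requests  # pip install obs-websocket-py

ws = obsws("localhost", 4444, "secret")   # host, port and password are placeholders
ws.connect()

last_packet = time.time()  # the proxy part updates this whenever stream bytes flow

current = None
while True:
    scene = "connected" if time.time() - last_packet < 3.0 else "disconnected"
    if scene != current:  # only switch when the state actually changes
        ws.call(requests.SetCurrentScene(scene))
        current = scene
    time.sleep(1)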
the “connected” scene configured as an SRT media source
So when you are out and about live streaming your day this little tool takes care of controlling OBS Studio for you.
the “disconnected” scene configured to play a nice beach sunset and quiet music to calm people down as the live streamer reconnects…
Today is the day FeM e.V. celebrates its 25th birthday.
The Forschungsgemeinschaft elektronische Medien e. V. (FeM) is one of the largest student associations at a university in Thuringia. The association was founded in 1997 around TU Ilmenau. It currently has roughly 2,000 members and operates the largest self-administered student network in Thuringia. Through various streaming projects the association also became known beyond Thuringia.
I was a member for longer than a standard period of study and got to do, try out and help shape so many great things that my life during and after FeM was a different one.
Right from the start I set up my stream so that the current player counter was always visible on screen. I simply find that a fun piece of information, especially for people interested in LuckyV.
top right – the current number of concurrent players on LuckyV
My original implementation was somewhat complicated – too complicated to simply share with others.
So I decided to wrap the counter in its own Windows application that streamers can easily use and integrate into OBS.
Start it and check that the number is displayed – it should look roughly like this:
There are now two ways to integrate it.
Option 1: window capture
In the application you can configure the background color as well as the font and font color. Once you have set it up the way you want it, choose “Window Capture” in the sources menu and then select the application window.
window capture
This source can then be configured however you like, e.g. with filters to make everything except the text transparent, and so on…
Option 2: playercount.txt
While the application is running it constantly updates a file named “playercount.txt” in the same folder. You can configure OBS to regularly read and display this file.
To do this, add a “Text (GDI+)” source in the sources menu and configure it to read its text from a file:
Here you can also configure font, size and color as you like.
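If you would rather script the counter than run the Windows application, the idea boils down to a small polling loop – note that the API URL below is just a placeholder, not LuckyV’s real endpoint, and the JSON shape is an assumption:

import time
import requests  # pip install requests

API_URL = "https://example.org/playercount"  # placeholder - not the real endpoint

while True:
    # assumed JSON shape: {"players": 437}
    count = requests.get(API_URL, timeout=10).json()["players"]
    with open("playercount.txt", "w") as f:
        f.write(str(count))
    time.sleep(30)  # refresh every 30 seconds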
With the release of the M1 iPad Pro I had decided to order one to replace my aging iPad mini 4 from 2015.
And so far I am very happy with it. I’ve got it with the Apple Smart Folio, which comes with this strange material that seems to collect dust like crazy. At least it seems to do its job of protecting the device.
The Smart Folio allows you to prop up the iPad either fully upright or lay it down onto a table at an angle.
Embedded into the Smart Folio are magnets on both sides. It even depends solely on magnetic force to attach to the iPad Pro.
Now I’m regularly typing and using a mouse with iPadOS. This works great with the normal Bluetooth keyboard and mouse from Logitech that I still had around. But those desktop peripherals are quite heavy and big compared to what you would want with a mobile device like the iPad Pro.
There are multiple options that replace the Smart Folio with keyboard and touchpad combos. From Apple, from Logitech and of course the usual suspects from China. Those combos all have different downsides for me. For example:
Apple Magic Keyboard
enormous price
No F-keys
Heavy
Logitech Folio Touch
does not attach magnetically but puts the iPad into a bumper frame
As thick as it gets
Kind of pricey for keyboard and touchpad
Both of the above options require the iPad Pro to always be connected to the case/keyboard. This limits the angles and the distance at which I can put the iPad to use it. It limits how I can use the keyboard and in what positions I can type. Both of them also connect directly to the iPad Pro through the back connectors for power and data transfer.
None of this is a good thing for me. I want a keyboard+touchpad that I can put at basically whatever distance I want in front of the screen, and put the screen anywhere I need, to be able to work comfortably. Everything being too tightly integrated and requiring a permanent connection to even work is a big downside for me.
So I started to look around and quickly found lots of self-powered keyboard/touchpad combinations that have actually been available for years.
After some research I settled on purchasing one that ticked all the boxes for me:
The haptic feel when typing should be bearable; ideally it should be like a good notebook keyboard
The touchpad should support multi-touch gestures and work well with iPad OS – that is a really hard thing to achieve – it seems
Bluetooth 5.0 connection that does not interfere with WiFi
very light, yet has to have enough battery for hours of use
Needs to attach somehow to the iPad case while not in use, yet be physically detached from the iPad while in use
Needs to support all normal keys you would need on Linux console or while programming, including the F-keys.
cheap?
This is how it looks like while in use:
As you can see it’s not actually attached to the iPad but just there, ready to be used. It’s a fair size – remember: this is a 12.9 inch iPad next to it.
All the above checkboxes are ticked, as the keyboard feels good while typing. It has F-keys and even offers switchable layouts for different use cases. All my programming and console needs are fulfilled.
It’s insanely light – feels almost too light. But the backside is thin metal which is magnetic. And yes. It just attaches to the outside of the original Apple Smart Folio that I already had. It literally just snaps onto it and stays there while being moved from one place to the other.
With the flexibility of the original Smart Folio I can now put the iPad onto the couch table and sit comfortably on the couch while typing and using the touchpad with the stable small keyboard on my lap.
Since it comes with its own battery (I have had it for a week and was unable to drain it), it’s a bonus that charging takes place through a USB-C port. Most other cheap keyboard/touchpad combinations come with a Micro-USB port for charging. Even in 2021.
I could not resist opening it right up. There are 8 screws at the bottom that can easily be removed.
Look how easy it will be to replace the battery one day. This is a basic off-the-shelf battery pack that is cheap to replace when faulty.
Now while I can recommend this keyboard for the iPad Pro, I cannot tell you where you can get it. I ordered mine on Amazon, but while I was writing this article I was unable to find and link the product page. It apparently got removed.
So my only recommendation would be: go on a hunt for keyboards with similar options. Mine also has key backlights in different colors – which nobody needs for any reason. But if you go on the hunt, look out for keyboard/touchpad combinations that offer Bluetooth 5.0 and USB-C for charging. Compare the pictures, as the keyboard layout was quite unique (T-cursor keys, F-keys, …).
Found this nice heap of icons that are free to use and high quality:
Health Icons is a volunteer effort to create a ‘global good’ for health projects all over the world. These icons are available in the public domain for use in any type of project.
The project is hosted by the public health not-for-profit Resolve to Save Lives as an expression of our commitment to offer the icons for free, forever.
The only meter in our house that I was not yet able to read out automatically was the water meter.
With the help of a great open source project by the name of AI-on-the-edge and an ESP32-Camera Module it is quite simple to regularly take a picture of the meter, convert it into a digital read-out and send it away through MQTT.
The process is quite simple and straightforward.
Flash the ready-made firmware image to the module
Configure the WiFi using a SD card
Put the module directly over the meter
Connect to it and setup the reference points and the meter recognition marks
As you can see above, all the recognition is done on the ESP32 module with its 4 MB of RAM.
With the data sent through MQTT it’s easy to draw nice graphs:
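And if you want to tap into the readings yourself, a minimal subscriber is all it takes – the broker address and topic prefix here are assumptions, as AI-on-the-edge lets you configure them:

import paho.mqtt.client as mqtt  # pip install paho-mqtt (1.x API shown)

def on_message(client, userdata, msg):
    # e.g. "watermeter/main/value: 123.456"
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost")          # your MQTT broker
client.subscribe("watermeter/#")     # topic prefix as configured on the ESP32
client.loop_forever()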
Since a picture says more than a thousand words, I give you the result first:
my crkbd based keyboard
This keyboard design is open source from the ground up and naturally is fully available as a Git repository containing everything you need to start: PCB schematics, drawings, documentation and firmware source code.
It took me a couple of months to get all the required parts ordered and delivered. Many small envelopes with parts that seemingly are only produced by a handful of manufacturers. But anyway: after everything had arrived and was checked for completeness, my wife took the hardware parts into her hands and started soldering and assembling the keyboard.
And so this project naturally was split up between my wife and me in the most natural (to us) way: my wife did all the hardware parts, whilst I did the software and interfacing portion. (Admittedly, all that had to be figured out was how to get the firmware compiled and altered to my specific needs.)
After putting the hardware together it was time to get the firmware sorted as well. This keyboard design is based upon the open source QMK (Quantum Mechanical Keyboard) firmware.
Conveniently QMK comes with its own build tools – so you will be up and running in no time. Since I had purchased Arduino ProMicro controllers I was good with the most basic setup you can imagine. As the base requirements for the toolchain were minimal I went with the machine that I had in front of me – a Raspberry Pi 4 with the standard Raspberry Pi OS.
These were the steps to get going:
get Python 3 and the qmk tool installed – I’ve chosen not to use the tool setup procedure but instead go with a separate clone of the QMK firmware repository.
python3 -m pip install --user qmk
clone the QMK firmware repository and get the QMK tool running (in the /bin folder of the firmware repository – it’s actually just a python script)
git clone https://github.com/qmk/qmk_firmware.git
cd qmk_firmware
git submodule sync --recursive
git submodule update --init --recursive --progress
make crkbd:default
create your own keymap to work with. You gotta use the crkbd firmware options as the base for this keyboard. The command below will generate a subfolder with the name of your keymap in the keyboards/crkbd/keymaps folder, pre-filled with the default settings of the crkbd keyboard firmware.
qmk new-keymap -kb crkbd
build your first firmware by running the command below (note: btk-corne is the name of my keymap)
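make crkbd:btk-corne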
now you can flash the firmware to both ProMicro controllers. The most straightforward way for me was using avrdude on the command line. In my case the device shows up as /dev/ttyACM0 and the compiled firmware is named crkbd_rev1_legacy_btk-corne.hex.
Once you have all this information you need to plug in the ProMicro and trigger a reset by bridging ground and the reset pin. If you added a button for reset, like we did, you can use that. After hitting reset the ProMicro bootloader enters the state in which it can be flashed. Reset it and THEN run the avrdude command line.
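With the values from above, that command line looks something like this – the -c avr109 programmer type matches the ProMicro’s Caterina bootloader, but double-check the port and file name for your setup:

avrdude -p atmega32u4 -c avr109 -P /dev/ttyACM0 -U flash:w:crkbd_rev1_legacy_btk-corne.hex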
(alternatively) you can also use QMK Toolbox to flash the firmware. Also works.
So now you know how to get the firmware compiled and running (if not, look here for more details). But most probably you are not happy with some aspects of your keymap or firmware.
By now you might ask yourself: Hey, I’ve got two ProMicros on one keyboard. Both are flashed with the same firmware. Into which of the two do I plug in the USB cable that then is plugged into the computer?
The answer is: by default QMK assumes that you are plugging into the left half of the keyboard making the left half the master. If you prefer to use the right half you can change this behaviour in the config.h file in the firmware:
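For reference, this is the relevant define (QMK also supports storing handedness in EEPROM via EE_HANDS):

// in your keymap’s config.h – makes the right half the master
#define MASTER_RIGHT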
You have to plug in both of them anyway at times when you want to flash a new firmware to them as you adjust and make changes to your keymap.
Thankfully QMK comes with loads of options and even a very useful configurator tool. I used this tool to adjust the keymap to my requirements. The process there is straightforward again: open up the configurator and select the correct keyboard type. In my case that is crkbd/legacy. The basic difference between legacy and common is a different communication protocol between the two halves. This really only matters when features are used that require some sort of sync between the two halves – like some RGB LED effects. Since I did not add any LEDs to the build I go with legacy for now. Maybe I will need some features later that require me to go with common.
The configurator allows you to set up the whole keymap and upload/download it as a .json file.
That .json file can easily be converted into the C code that you need to put into the actual keymap.c file. Assuming the .json file you got is named btk-corne.json, the full command line is:
qmk json2c btk-corne.json
Then simply take this output and replace the stuff in the keymap.c with it:
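To give you an idea of the shape of that output, here is the default crkbd base layer as an example – the exact layout macro name may differ depending on your QMK version, and your own json2c output goes in its place:

#include QMK_KEYBOARD_H

// one array entry per layer
const uint16_t PROGMEM keymaps[][MATRIX_ROWS][MATRIX_COLS] = {
    [0] = LAYOUT_split_3x6_3(
        KC_TAB,  KC_Q, KC_W, KC_E, KC_R, KC_T,    KC_Y, KC_U, KC_I,    KC_O,   KC_P,    KC_BSPC,
        KC_LCTL, KC_A, KC_S, KC_D, KC_F, KC_G,    KC_H, KC_J, KC_K,    KC_L,   KC_SCLN, KC_QUOT,
        KC_LSFT, KC_Z, KC_X, KC_C, KC_V, KC_B,    KC_N, KC_M, KC_COMM, KC_DOT, KC_SLSH, KC_ESC,
                          KC_LGUI, MO(1), KC_SPC,    KC_ENT, MO(2), KC_RALT
    ),
};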
Now you compile and flash again. And if all went right you’ve got the new keymap and firmware on your keyboard and it’ll work just like that :)
Disclaimer: I’ve joined for fun and not for profit – this is a new hobby.
For about a year now I have been regularly watching some Twitch streamers go about their business, and it piqued my curiosity when some of them started to do something they called “GTA V roleplay”.
Grand Theft Auto V (GTA V) is a 2013 action-adventure game developed by Rockstar North and published by Rockstar Games. Set within the fictional state of San Andreas, based on Southern California, the open world design lets players freely roam San Andreas’ open countryside and the fictional city of Los Santos, based on Los Angeles. The game is played from either a third-person or first-person perspective, and its world is navigated on foot and by vehicle.
So these streamers were mostly using an alternative client application to log into GTA V online servers that were operated by independent teams, to play the roles of characters they created themselves. It started to get really interesting when dynamics and interactions happen between those characters and whole stories unfold over the course of days and weeks.
It’s great fun watching and having the opportunity to sometimes see multiple perspectives (by multiple streamers) of the same story and eventually even to be able to interact with the streamers communities.
One such fairly big German server is LuckyV. It’s an alternative GTA V hardcore roleplay server created by players for players.
The hardcore here means: the characters are supposed to act as much as possible like they would in real life in the situations they encounter.
So in order to play on this server you have to create a character and the characters background story. You gotta really play that character when on the server.
When you play it’s not just a vanilla GTA V experience. There are lots of features that are specific to the server you are playing on. Some examples are:
Communication: you are communicating with people in your vicinity directly – you can hear them if they are close enough to be heard and you can be heard when you are close to people
Jobs: there’s lots to be done. Become CEO of your own company and manage it!
Social Interaction: there’s probably an event happening just around the next corner. You are able to meet people. Crowds of people, even. Remember: there are usually no non-players. Every person you see is a real human who you can interact with.
The LuckyV community made a great overview page where you can watch other people playing and live streaming their journey. It’s extensive – over 200 streamers are online regularly and the screenshot below shows a mid-week day right after lunch…
LuckyV Streamer overview page
Anyhow. This is all great and fun but plot twist: I do not play it. (yet)
So what do I have to do with it, except watching streamers? Easy: behind the game there’s code. Lots of code actually.
In a nutshell, there’s a custom GTA V server implementation that talks to a custom GTA V client. LuckyV is using the altV server and client to expand the functionalities and bring the players into the world.
It allows for 1000 simultaneous players in the same world at a time. So there could be 1000 people right there with you. Actually, since LuckyV is about to have its first birthday, the regular player numbers are peaking at around 450 simultaneous players in Los Santos at a time.
The whole set-up consists of several services all put together:
web pages for game overlays, in-game UI and administration tools (PHP)
a SQL database that holds the item, character etc. data
a pub/sub style message hub that enables communication between in-game UI, webpages and the gamemode
a TeamSpeak 3 server that allows players to join a common channel (essentially one teamspeak room) and a plug-in called SaltyChat that mutes/unmutes players in the vicinity and allows features like in-game mobile phone etc.
everything above runs in containers and is easily deployable anywhere you’ve got enough hardware to run it – when there are hundreds of players online the load of the machine grows almost linearly, and the machine is earning its keep then…
So after the team announced some vacancies through those streamers I watched, I contacted them and asked if I could help out.
And that’s how I got there working on both the gamemode code as well as helping the infrastructure become more stable and resilient.
For my first real contribution to the gamemode I was asked to implement secondary keys for vehicles as well as apartments/houses.
Up until now only the owner/tenant of a vehicle or apartment had access to it. Since this game is about social interactions, it would be a good addition if that owner could hand out additional keys to those they love / interact with.
And that I did. I worked my way through the existing code base – which is a “grown codebase” – and after about 3 days of work it worked!
Most impressive for me is the team and the people I’ve met there. This current team welcomed me warmly and helped me wrap my head around the patterns in the code. Given the enthusiast/hobby character this has, it’s almost frightening how professionally and nicely everything works out. I mean, we developers had a demo session with the game design team to show off what our feature does and how it works, and to let them try it out to see if it’s like they envisioned it.
They even did a trailer for the feature I worked on! And it is as cheesy as I could have wished for:
So far so good: It’s great fun and really rewarding working with all these nice people to bring even more fun and joy to players. Seeing the player numbers grow. Seeing streamers actually use the features and play with it – handing over keys to their partner. Really rewarding.
Like this example:
right at 2 hrs 5 min, Ariane Barnes is handing over a key to her loved one.
In part 1 I wrote a bit about this great game and shared some screenshots. By now I’ve finished the story and almost all side quests, and I still see it as the best game in years.
Anyway, here are more pictures taken in-game (sometimes stitched):
For the first time in the last 10ish years I am back playing a game that really impresses me. The story, the world and the technology of Cyberpunk 2077 really is a step forward.
It’s a first in many aspects for me. I do not own a PC capable of playing Cyberpunk 2077 at any quality level. Usually I am playing games on consoles like the PlayStation. But for this one I have chosen to play on the PC platform. But how?
I am using game streaming. The game is rendered in a datacenter on a PC and graphics card I am renting for the purpose of playing the game. And it simply works great!
So I am playing a next-generation open-world game with technical breakthroughs like raytracing, used to produce really great graphics, streamed over the internet to my big-screen TV, with my keyboard+mouse forwarded to that datacenter without (for me) noticeable lag or quality issues.
The only downside I can see so far is that sooo many people like to play it this way that there are not enough machines (gaming rigs) available for all the players that want one – so there’s a queue in the evening.
But I am doing what I am always doing when I play games. I take screenshots. And if the graphics are great I am even trying to make panoramic views of the in-game graphics. Remember my GTA V and BioShock Infinite pictures?
So here is the first batch of pictures – some stitched together from 16 and more single screenshots. Look at the detail! Again – these are in-game screenshots. Click on them to make them bigger – and right-click to open the source to really zoom into them.