“Swagger is a specification and complete framework implementation for describing, producing, consuming, and visualizing RESTful web services. The overarching goal of Swagger is to enable client and documentation systems to update at the same pace as the server. The documentation of methods, parameters, and models is tightly integrated into the server code, allowing APIs to always stay in sync. With Swagger, deploying, managing, and using powerful APIs has never been easier.”
An exciting new feature has been added to the Miataru service! It’s now possible to embed the location of a device into any website. Here’s an example:
It’s a pretty easy process. When your device is available on the standard public Miataru service, you only have to embed an iframe into the website. Just like this:
<iframe width="320" scrolling="no" height="240" frameborder="0" src="http://miataru.com/client/embed.html#BF0160F5-4138-402C-A5F0-DEB1AA1F4216;Demo Device"></iframe>
Playing with OpenStreetMap resources lately, I came to the point where I wanted to calculate routes between points based on the OSM data. GraphHopper to the rescue! It’s open source and awesome!
“GraphHopper offers memory efficient algorithms in Java for routing on graphs. E.g. Dijkstra and A* but also optimized road routing algorithms like Contraction Hierarchies. It stands under the Apache License and is built on a large test suite.”
Source 1: http://graphhopper.com
It’s impressive what these browsers started to become these days. Here you have a quite convincing wave simulation right in your browser with some knobs to play with:
Useful advice is right ahead! This cheat sheet offers some very interesting thoughts and advice for your own or someone else’s business.
I particularly liked:
“21) Should I ever focus on SEO? No.
22) Should I do social media marketing? No.”
Some might know AmbiLight – a great invention by Philips that projects colored light around a TV screen based upon the contents shown. It’s a great addition to a TV but naturally only available with Philips TV sets.
Not anymore. There are several open-source projects that allow you to build your very own AmbiLight clone. I’ve built one using a 50-LED WS2801 strip, a 5V/10A power supply, a Raspberry Pi, and the BobLight integration in RaspBMC (a nice XBMC distribution for the Pi).
“Boblight is a collection of tools for driving lights connected to an external controller.
Its main purpose is to create light effects from an external input, such as a video stream (desktop capture, video player, tv card), an audio stream (jack, alsa), or user input (lirc, http). Boblight uses a client/server model, where clients are responsible for translating an external input to light data, and boblightd is responsible for translating the light data into commands for external light controllers.”
The hardware to start with looks like this:
I’ve fitted some heat sinks to the Pi, since controlling 50 LEDs adds a little bit of additional CPU load – headroom that is desperately needed when playing Full-HD, high-bitrate content.
The puzzle pieces need to be put together as described by the very good AdaFruit diagram:
As you can see, the Pi is powered directly through the GPIO pins – you’re not going to use the MicroUSB or the USB ports to power it. It’s important to keep the cables between the Pi and the LEDs as short as possible. When I added longer, unshielded cables, everything started flickering. You do not want that – so short cables it is 🙂
When you look closely at the picture above you will find a CO and a DO on the PCB of the LED; on the other side of the PCB there’s a CI and a DI. Guess what: that means Clock Out and Data Out, and Clock In and Data In. Don’t be misled by the adapter cables the LED strip comes with. My output socket looked damn close to something I thought was an input socket. If nothing seems to work on the first trials – you’re holding it wrong! Don’t let the adapters fitted by the manufacturer mislead you.
Depending on the manufacturer of your particular LED strip, layouts different from the image above are possible. Since RaspBMC already bundles Boblight, you want to use something that is compatible with it – something that allows Boblight to control each LED’s color and brightness separately.
I opted for WS2801-equipped LEDs. This pretty much means that each LED sits on its own WS2801 chip, and that chip takes commands for color and brightness. There are other options as well – I hear that LPD8806 chips also work with Boblight.
My power supply may be a little too beefy – 10 amps is plenty. I originally planned to have 100 LEDs on that single TV. Each LED at full white brightness would consume 60 mA – which brings us to 6 amps for 100 LEDs – add the 2 amps for the Pi and you’re at 8 A. So 10 A it was.
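The back-of-the-envelope budget above can be written down in a few lines (the figures are the ones from the text; the 2 A Pi budget is generous):

```python
LED_COUNT = 100  # originally planned number of LEDs
MA_PER_LED = 60  # one WS2801 LED at full white draws about 60 mA
PI_MA = 2000     # generous 2 A budget for the Raspberry Pi itself

# Worst case: every LED at full white, plus the Pi.
total_ma = LED_COUNT * MA_PER_LED + PI_MA
print("worst-case draw: %.1f A" % (total_ma / 1000))  # -> worst-case draw: 8.0 A
```

With 8 A worst case, a 10 A supply leaves a comfortable safety margin.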
To connect to the Pi’s GPIO pins I used simple jumper wires, after a little bit of boblightd compilation on a vanilla Raspbian SD card (how-to here). Please note that with current RaspBMC versions you do not need to compile Boblight yourself – I just took a clean Raspbian image for debugging purposes and compiled it myself to run some boblight-constant tests. Boblight-constant is a tool that comes with Boblight and allows you to set all LEDs to one color.
If everything is right, it should look like this:
Now everything depends on what your LED strips and the back of your TV look like. I wanted to fit my setup to a 42″ Samsung TV. This one is already fitted with an ultra-slim wall mount, which makes it sit flat on the wall pretty much like a picture. I wanted the LEDs to sit right on the TV’s back, and I figured that cable channels, when cut, would do the job pretty nicely.
To get RaspBMC working with your setup the only things you need to do are:
- Enable Boblight support in the Applications / RaspBMC tool
- Log in to your RaspBMC Pi through SSH with the user pi and password raspberry, and copy your boblight.conf file to /etc/boblight.conf.
The configuration file can be obtained from the various tutorials that deal with the boblight configuration. You can choose the hard way and create a configuration by hand, or a rather easy one by using the boblight configuration tool.
I’ve used the tool 🙂
Now, if everything went right, you have no flickering, the TV is on the wall, and you can watch movies and what-not with beautiful light effects around your TV screen. If you need to test your set-up to tweak it a bit more, go with this or this.
Source 1: http://en.wikipedia.org/wiki/Ambilight
Source 2: http://www.raspberrypi.org/
Source 3: https://code.google.com/p/boblight/
Source 4: http://www.raspbmc.com/
Source 5: http://learn.adafruit.com/light-painting-with-raspberry-pi/hardware
Source 6: How-To-Compile-Boblight
Source 7: Boblight Config Generator
Source 8: Boblight Windows Config Creation Tool
Source 9: Test-Video 1
Source 10: Test-Video 2
Quite an interesting read of things to have in mind when doing, you know… life 🙂
“Make time to pursue your passion, no matter how busy you are.“
A very interesting find that I have wanted to blog about for a while now – loads of stuff to read and watch through – be it art or history.
“Google has partnered with hundreds of museums, cultural institutions, and archives to host the world’s cultural treasures online.
With a team of dedicated Googlers, we are building tools that allow the cultural sector to display more of its diverse heritage online, making it accessible to all.
Here you can find artworks, landmarks and world heritage sites, as well as digital exhibitions that tell the stories behind the archives of cultural institutions across the globe.”
Source 1: http://www.google.com/intl/en/culturalinstitute/about/
Source 2: D-Day
Interesting concept: by letting you play the sounds of coffee shops and lounges – the chattering and mumbling of people in the background – this website tries to boost creativity.
Wikipedia says about JSON:
Unfortunately, complex JSON can get a bit heavy on the structure itself, with data schemes and IDs repeated over and over.
There’s RJSON to the rescue on this. It’s backwards compatible and makes your JSON more compressible:
“RJSON converts any JSON data collection into a more compact recursive form. Compressed data is still JSON and can be parsed with JSON.parse. RJSON can compress not only homogeneous collections, but also any data sets with free structure.
RJSON is a single-pass stream compressor: it extracts data schemes from the document, assigns each schema a unique number, and uses this number instead of repeating the same property names again and again.”
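The scheme-extraction idea is easier to see in code. The following is not the real RJSON implementation (which is JavaScript and has a different output format) – just a toy Python sketch of the core trick:

```python
def rjson_pack(records):
    """Toy sketch of RJSON's core idea: the first record with a given
    key set (its "schema") is written out in full; later records with
    the same schema are emitted compactly as [schema_id, value, ...],
    so the property names never repeat."""
    schemas = {}  # key tuple -> schema id
    packed = []
    for rec in records:
        keys = tuple(sorted(rec))
        if keys not in schemas:
            schemas[keys] = len(schemas) + 1
            packed.append(rec)  # full form defines the schema
        else:
            packed.append([schemas[keys]] + [rec[k] for k in keys])
    return packed

data = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}, {"id": 3, "name": "c"}]
print(rjson_pack(data))
# -> [{'id': 1, 'name': 'a'}, [1, 2, 'b'], [1, 3, 'c']]
```

The packed form is still valid JSON, which is exactly why it stays compatible with ordinary parsers and compresses better.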
Of course this is all open-source and you can get your hands dirty here.
While using Xcode a lot lately, I quickly got used to one or two keyboard shortcuts that come in handy every once in a while. This cheat sheet aims at bringing you a lot of shortcuts that are pretty hard to remember if you’re not using them all the time (at least for me).
After getting the server and the iOS client application out to the people, I sat down and started doing something I had not done yet – writing a web application with no server side except a standard HTTP server.
Here’s a little demonstration which I will explain in more detail below:
The default Miataru service can be accessed through the client application at this URL: http://miataru.com/client – this will open a new browser window with a completely fresh session of the application. Since Miataru is all about control of your own data, this web application does not store anything on any servers – every access to the internet is read-only and goes only to the Miataru service (just “GetLocation”). Oh – and by default it uses SSL to encrypt all traffic from and to the Miataru service.
You can start by entering DeviceIDs you know, or – for test purposes – use a DeviceID I am providing: BF0160F5-4138-402C-A5F0-DEB1AA1F4216
Of course, the easiest way is to just embed the DeviceID into the URL, like this: http://miataru.com/client/#BF0160F5-4138-402C-A5F0-DEB1AA1F4216
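If you want to talk to the service from your own code rather than the web client, a GetLocation request is just a small JSON POST. A hedged sketch – the field names and endpoint below are my reading of the Miataru API docs, so verify them against miataru.com before relying on them:

```python
import json

# DeviceID from the text above; the payload shape is an assumption
# based on the Miataru API documentation.
device_id = "BF0160F5-4138-402C-A5F0-DEB1AA1F4216"
payload = {"MiataruGetLocation": [{"Device": device_id}]}
body = json.dumps(payload)

# You would POST `body` with Content-Type: application/json to
# https://service.miataru.com/v1/GetLocation (endpoint assumed here)
# and get back the device's last known location as JSON.
print(body)
```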
Oh, and if you want to see the device moving on your iPhone, just use the Miataru iOS client and scan this QR code:
So that was easy – but if the application does not store anything on any server, how does it maintain the Known Devices list between browser sessions (opens/closes of the browser), you ask? It’s using HTML5 WebStorage to store this information locally in your browser. This has the advantage of being completely local, but also the disadvantage that it is not shared between browsers or machines.
As usual, this whole application is available completely free of charge and open-sourced, to be used, edited and installed on-premise if you like.
Let me know how you like it!
Source 1: http://miataru.com/client
Source 2: http://miataru.com/client/#BF0160F5-4138-402C-A5F0-DEB1AA1F4216
Source 3: https://github.com/miataru/miataru-webclient
Source 4: http://www.w3schools.com/html/html5_webstorage.asp
“A Dark Pattern is a type of user interface that appears to have been carefully crafted to trick users into doing things, such as buying insurance with their purchase or signing up for recurring bills.”
Having fun with hardware is a good way to learn about the machines which soon will become our new overlords. With this pretty interesting presentation you can dive deep into what a CPU does and how it can be exploited to run code by not running it.
“Trust Analysis, i.e. determining that a system will not execute some class of computations, typically assumes that all computation is captured by an instruction trace. We show that powerful computation on x86 processors is possible without executing any CPU instructions. We demonstrate a Turing-complete execution environment driven solely by the IA32 architecture’s interrupt handling and memory translation tables, in which the processor is trapped in a series of page faults and double faults, without ever successfully dispatching any instructions. The “hard-wired” logic of handling these faults is used to perform arithmetic and logic primitives, as well as memory reads and writes. This mechanism can also perform branches and loops if the memory is set up and mapped just right. We discuss the lessons of this execution model for future trustworthy architectures.”
Since I’ve become sort of an iOS developer lately, I’ve had my fair share of WWDC recordings to get started with this whole CocoaTouch and Objective-C development stuff.
A pretty handy tool is this website, which offers a full-text transcript search of all WWDC recordings. Awesome!
SDR – or Software Defined Radio – is a relatively cheap and fun way to dive deeper into radio communication.
“Software-defined radio (SDR) is a radio communication system where components that have been typically implemented in hardware (e.g. mixers, filters, amplifiers, modulators/demodulators, detectors, etc.) are instead implemented by means of software on a personal computer or embedded system. While the concept of SDR is not new, the rapidly evolving capabilities of digital electronics render practical many processes which used to be only theoretically possible.” (Wikipedia)
So with cheap hardware it’s possible to receive radio transmissions on all sorts of frequencies and modulations. Since everything after the actual “receiving stuff”-phase happens in software the things you can do are sort of limitless.
Now what about the relatively cheap factor? The hardware you’re going to need to start with is a DVB-T USB stick, widely available for about 25 euros. The important feature to look for is that it comes with a Realtek RTL2832U chip.
“The RTL2832U is a high-performance DVB-T COFDM demodulator that supports a USB 2.0 interface. The RTL2832U complies with NorDig Unified 1.0.3, D-Book 5.0, and EN300 744 (ETSI Specification). It supports 2K or 8K mode with 6, 7, and 8MHz bandwidth. Modulation parameters, e.g., code rate, and guard interval, are automatically detected.
The RTL2832U supports tuners at IF (Intermediate Frequency, 36.125MHz), low-IF (4.57MHz), or Zero-IF output using a 28.8MHz crystal, and includes FM/DAB/DAB+ Radio Support. Embedded with an advanced ADC (Analog-to-Digital Converter), the RTL2832U features high stability in portable reception.” (RealTek)
You’ll find this chip in all sorts of cheap DVB-T USB sticks like this one:
To use the hardware directly you can use open-source software which comes pre-packaged with several important, widely used demodulator modules like AM/FM. Gqrx SDR is available for all sorts of operating systems and comes with a nice user interface to control your SDR hardware.
The neat idea about SDR is that, depending on the capabilities of your SDR hardware, you are not tuned into just one specific frequency but a whole spectrum several MHz wide. With my device I get roughly a full 2 MHz wide spectrum out of the device, allowing me to see several FM stations on one spectrum diagram and tune into them individually using the demodulators:
The above screenshot shows the OS X version of Gqrx tuned into an FM station. You can clearly see the three stations that I can receive in that MHz range: one very strong signal, one very weak, and one sort of in the middle. By just clicking there, the SDR tool decodes that portion of the data stream / spectrum and you can listen to an FM radio station.
Of course, those DVB-T sticks cover a wide usable spectrum – mine comes with an Elonics E4000 tuner which allows me to receive, more or less usably, 53 MHz to 2188 MHz (with a gap from 1095 to 1248 MHz).
Whatever your hardware can do can be tested by using the rtl_test tool:
root@berry:~# rtl_test -t
Found 1 device(s):
0: Terratec T Stick PLUS
Using device 0: Terratec T Stick PLUS
Found Elonics E4000 tuner
Supported gain values (14): -1.0 1.5 4.0 6.5 9.0 11.5 14.0 16.5 19.0 21.5 24.0 29.0 34.0 42.0
Benchmarking E4000 PLL…
[E4K] PLL not locked for 52000000 Hz!
[E4K] PLL not locked for 2189000000 Hz!
[E4K] PLL not locked for 1095000000 Hz!
[E4K] PLL not locked for 1248000000 Hz!
E4K range: 53 to 2188 MHz
E4K L-band gap: 1095 to 1248 MHz
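The range and gap reported above are easy to turn into a quick sanity check before you try to tune somewhere. A small helper (the gap edges are treated as tunable here, which is a guess – rtl_test only reports the boundaries):

```python
# Mirrors the rtl_test output above: my E4000 tunes 53-2188 MHz
# with an L-band gap from 1095 to 1248 MHz.
def e4000_can_tune(mhz, lo=53, hi=2188, gap=(1095, 1248)):
    """Return True if the given frequency (in MHz) falls into the
    tuner's usable range and outside the L-band gap."""
    if not lo <= mhz <= hi:
        return False
    return not gap[0] < mhz < gap[1]

print(e4000_can_tune(100))   # FM broadcast band -> True
print(e4000_can_tune(1200))  # inside the L-band gap -> False
```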
Interestingly, when you plug the USB stick into a Raspberry Pi and follow some instructions, you can use the Raspberry Pi as an SDR server, allowing you to place it in the attic while still sitting comfortably at your computer downstairs for better reception.
If you want to upgrade your experience with more professional hardware – and in fact, if you have a transmitter license – you can take a look at the HackRF project, which is currently creating a highly sophisticated SDR hardware+software solution:
Source 1: http://www.realtek.com.tw/products/productsView.aspx?Langid=1&PFid=35&Level=4&Conn=3&ProdID=257
Source 2: http://gqrx.dk/
Source 3: www.hamradioscience.com/raspberry-pi-as-remote-server-for-rtl2832u-sdr/
Source 4: http://ossmann.blogspot.de/2012/06/introducing-hackrf.html
Source 5: https://github.com/mossmann/hackrf
If you want to interface with the publicly available instance of the Miataru server, you can use the URL http://service.miataru.com. This URL is also pre-configured in the iOS client that recently became available in the AppStore.
I started working on a Node.js project and so far it’s been quite a satisfying experience. But what is Node.js?
There are a lot of things that are approached differently in Node. One of them is how you deal with code and debugging.
I come from a world of fully integrated development environments. I had that for C#, it’s there for Java, it’s even there for Objective-C.
So it’s a bit like a toolbox you are supposed to put together yourself. In this article I want to describe what a two-week beginner’s development environment for Node looks like. If you have anything to improve or add – go ahead, leave a comment!
GIT! I am using GitX and command line git to work with the source control. Nothing special really.
You have a lot of options here, be it the awesome Sublime Text 2, Eclipse or NetBeans. I chose Coda 2 since I already owned it and was using it for my humble web development intermezzos. It’s awesome, and if you’re on a Mac you should give it a try!
Now things are getting interesting. To debug Node.js applications you have a lot of options, many of which work quite well. Unfortunately I was not able to find the one IDE that provides it all in one – great code editing and good debugging. So I chose a stand-alone debugging solution that does the trick in the best way I can think of. It’s called node-inspector and seems to be available on all possible platforms.
Triggering and Glue
There’s only one thing left that hinders code hacking and debugging: the fact that Node.js in its default state does not reload changed local code files after it has loaded them once. This means that when you edit something, you would have to manually restart Node.js to see the changes you just made take effect. That’s where a little tool called Supervisor comes into play. It watches the files of your project and kills+restarts Node.js automatically for you, taking care of that bugging restart cycle. It just works!
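The mechanism behind Supervisor is simple enough to sketch: take a snapshot of the watched files’ modification times and restart when any of them changes. A toy illustration (this is my own sketch, not Supervisor’s actual code):

```python
import os

def snapshot(paths):
    """Record the modification time of every watched file."""
    return {p: os.stat(p).st_mtime for p in paths}

def changed_files(paths, old):
    """Return the files whose mtime differs from the snapshot --
    exactly the event a Supervisor-style tool reacts to with a
    kill+restart of the Node process."""
    now = snapshot(paths)
    return [p for p in paths if now[p] != old.get(p)]
```

A real tool polls in a loop (or uses OS file events) and restarts the node child process whenever `changed_files()` comes back non-empty.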
Of course there are some more things regarding writing tests. But that is going to be another article.
Source 1: http://nodejs.org
Source 2: http://en.wikipedia.org/wiki/Node.js
Source 3: http://panic.com/coda/
Source 4: https://github.com/node-inspector/node-inspector
Source 5: https://github.com/isaacs/node-supervisor
I am a long-time user of Google Latitude – and since I added a Google Latitude module to h.a.c.s., almost the whole family started using this service. It’s all about tracking your location.
“Latitude has been retired
Google Latitude was retired on August 9th, 2013. Products retired include Google Latitude in Google Maps for Android, Latitude for iPhone, the Latitude API, the public badge, the iGoogle Gadget, and the Latitude website at maps.google.com/latitude.
What does this mean for me?
- You are no longer able to share your location using Latitude….”
We used it for a lot of use cases: just to know if the other is en route to a meeting point, or to know if someone arrived safely during a long trip. Or, in terms of home automation, to let the house know if you are there or somewhere else – for instance to enable or disable the house alarm system, or to power up / shut down the heating if necessary.
After the retirement of Latitude on the 9th of August, all those use cases were not doable anymore. Yes, there are some tools that do this and that for location tracking. But even when Google Latitude was still active it did not fulfill all the use cases I had – it was just “good enough”. Now all those substitutes don’t cover even a fraction of those use cases.
Now what? Easy! If nothing works out, you gotta do it yourself!
So I started a new spare-time project I call Miataru. Weird name, eh?
“Miataru or 見当たる is Japanese and means “be found” or “to come across” and it’s meant to be a set of tools to allow the user to track locations and choose how to work with the data as well as how data is stored if at all.” (Miataru.com)
So – this is not meant to be a drop-in Google Latitude replacement for everyone. The goal is to create a client+server toolset that allows you to cover a lot of use cases around location tracking and interfacing with other software like home automation.
So expect some articles here about all the funny things and learnings from NodeJS and Objective-C / iOS development.
Some quick words to all you readers:
If you want to participate in an open source project in NodeJS and Mobile devices you’re invited to join anytime!
I added alarming to hacs a while ago, and I’ve now extended the built-in SMS gateway providers with the German Telekom service called “Global SMS API”.
This API is offered through Telekom’s own portal called Developer Garden and is as easy to use as it can possibly be. You only need to set up the account with Developer Garden, and after less than 5 minutes you can send and receive SMS and do a lot more. They have APIs for nearly everything you could possibly want to do … fancy some “talk to your house” action? It would be easy to integrate into h.a.c.s. using their Speech2Text APIs.
They have a short video showing how to set it all up: http://www.youtube.com/watch?v=caRSafzMDK0
So I’ve added the SMS-send capabilities to the hacs internal alarming system, with its own JSON configuration file looking like this:
And this simple piece of configuration leads to SMS getting sent out as soon as – in this example – a window opens:
Before the Telekom Global SMS API I used a different provider (SMS77), but since the delivery times of that provider varied like crazy (everything from 30 seconds to 5 minutes) and it had a lot of downtime, my thought was to give the market leader a try.
So now here it is – integrated. Get the source here.
Most development projects these days rely on a source code repository to keep control over the constant changes by many team members. The source code repository is therefore the complete history of a software project.
With the great tool called “gource” you can visualize it! This, for example, is the project a team from Rakuten Germany (where I work) worked on for the last couple of months:
Obviously it’s impossible for Apple to fix that quite annoying bug in their operating system that leads to double/triple/… program entries in the “Open with…” menu. Every time an application is updated, it adds a new entry but does not remove the old one.
This makes your “Open with…” menu look like this:
/System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister -kill -r -domain local -domain system -domain user;killall Finder
This simple command will kill the double/triple/… entries and restart your Finder.app to make the change visible. Your “Open with…” menu should now show only single entries per application:
“Hyper-lapse photography – a technique combining time-lapse and sweeping camera movements typically focused on a point-of-interest – has been a growing trend on video sites. It’s not hard to find stunning examples on Vimeo. Creating them requires precision and many hours stitching together photos taken from carefully mapped locations. We aimed at making the process simpler by using Google Street View as an aid, but quickly discovered that it could be used as the source material. It worked so well, we decided to design a very usable UI around our engine and release Google Street View Hyperlapse.“
The available IPv4 addresses are running out, and IPv6 is coming. There is no doubt about that! This weblog, for example, has been natively reachable over IPv6 for more than two years. With every month that passes things get more “precarious”, and this step is accordingly important for public administration, among others. This comprehensive document offers interesting insights:
“Since the early days of the internet, version 4 of the Internet Protocol (IPv4) has been used to transfer data. Today this protocol is used everywhere – including the internal networks of authorities and organizations. The internet, and all networks that use IPv4 today, face a profound technical change, because everyone must eventually switch to its successor, IPv6.
To the frequently asked question of which major factors drive a migration to IPv6, there are two central answers:
- There is migration pressure caused by the IPv4 addresses that are already (in Asia) no longer available.
- With the growing demand for addresses for all small and large devices – from sensors to smartphones to washing machines – that have to communicate over IP networks, the problem of the depleted IPv4 address space gets worse. The combination of both factors accelerates the push towards IPv6 migration.
In the future there will be many devices that have only an IPv6 address instead of an IPv4 address and are reachable only through it. Even today, IPv6 can no longer be disabled without restrictions in the most current operating system versions. Remaining IPv4 addresses can still be rented from providers for corresponding fees. When switching providers in the context of a new tender for services, however, you will no longer be able to “take them with you”. A migration to IPv6 therefore not only guarantees the availability of a sufficient number of IP addresses, but also secures the reachability of your own services for the future, without being dependent on a single provider.”
Last year in June I wrote about the concept of a ubiquitous status display of the business in every office. Especially for development and operations, it’s pretty important to have important measurements, status codes and project information in front of you all the time.
Back then I already wrote about the Panic status board, which is a great-looking example of a status display. Now there is software from the company Panic which offers anyone the ability to create such a status board. It’s for iOS and looks awesome!
The number is 27!
Right now there are 27 different ongoing missions to explore our solar system. A high number for something that is not part of our daily news cycle. Those missions currently concentrate on the Sun, Mars, Mercury, Venus, the Earth’s moon, and some asteroids.
Did you ever start a horde of virtual machines and a complicated VM-only network set-up just to simulate a moderately complex network and the interaction of nodes in that network? Well, that’s a tiresome, error-prone and labour-intensive process. Fear no more, there’s a tool to the rescue.
“Mininet creates a realistic virtual network, running real kernel, switch and application code, on a single machine (VM, cloud or native), in seconds, with a single command:”
“Because you can easily interact with your network using the Mininet CLI (and API), customize it, share it with others, or deploy it on real hardware, Mininet is useful for development, teaching, and research. Mininet is also a great way to develop, share, and experiment with OpenFlow and Software-Defined Networking systems.”
For just shy of two years I have been a fan of whisky. After I got the hang of the processes, tastes and smells around this spirit, I started collecting – collecting to drink them eventually.
Now, there are a number of shops from which you can buy good-quality whisky anywhere in the world. One of them happens to be located in Germany. This shop not only offers a huge choice, but is also a cross-seller’s dream: tasting and explanation videos beneath many of the whiskies, in which a very talented Mr. Horst Lüning tastes and explains all things whisky.
This shop hosts all videos on YouTube. Since I am a big fan of podcasting and internet-based entertainment, it’s a great thing that, because of my little tool called “YouTubeFeast”, all new episodes and tasting videos get downloaded automatically. To date I’ve downloaded well over 650 whisky tasting and explanation videos this way.
As a matter of fact, this is a really entertaining and educating series I would even pay to get access to. But that aside, every video that gets automatically downloaded usually looks like this (German audio):
As you can see, there’s a short intro (8 seconds) and an outro (29 seconds) that every single video starts and ends with. Under normal circumstances there are two occasions on which I have those videos played:
- When I want to look for a particular whisky and get an overview of how it’s going to be like.
- For over 12 years I have had a “nights playlist” – a playlist of things that are played back during the night, every night. For this it’s important that it’s mainly speech, very normalized audio, and of course it needs to be interesting.
So for the second use it’s important that there are not too many audio bumps and breaks. Unfortunately, as much as I like the intro and outro music, it’s actually very bass-heavy and as such sometimes sleep-interrupting… So, just like when a good new-make spirit is distilled, the start and end runs need to be separated from the heart that makes up the spirit.
Every 4-6 months I take all newly added videos, cut them down, and add them to the nights playlist folders. The process is like this:
- Rename them: Remove the following things from the filenames
“Whiskey Verkostung – “, “Whiskey Likör Verkostung “, “Whiskygläser “, “Whisky-Verkostung – “, “Whisky Vorstellung “, “Whisky Verkostung “, “Whisky Verkosten “, “Whisky Tasting – “, “Whisky Tasting “, “Whisky Likör Verkostung “, “Whiskey-Verkostung – “, “Whiskey Verkostung “, “Whiskey Tasting – “, “- ”
To rename the files I usually use the freeware tool Rename Master – it’s awesome!
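The renaming step is really just prefix-stripping and can be scripted as well. A minimal stand-in for what Rename Master does here (the prefix list is abbreviated from the full one above; order matters, longest variants first):

```python
# Strip any of the known title prefixes from a downloaded video's
# filename. Abbreviated prefix list -- extend with the full set above.
PREFIXES = [
    "Whisky-Verkostung – ",
    "Whisky Verkostung – ",
    "Whisky Tasting – ",
]

def clean_name(filename):
    for prefix in PREFIXES:
        if filename.startswith(prefix):
            return filename[len(prefix):]
    return filename

print(clean_name("Whisky Tasting – Lagavulin 16.mp4"))  # -> Lagavulin 16.mp4
```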
- Cut the intro away.
This is best done with a simple ffmpeg command:
ffmpeg -i $inputfile -ss 00:00:08.0 -acodec copy -vcodec copy $output
- Cut the outro away.
Using a little shell script it’s fairly easy to first get the full length of each video file and then, using another tool, subtract 29 seconds from that length and cut out the heart up to that point.
To get the length the following short line is doing a great job:
ffmpeg -i "$1" 2>&1 | grep Duration | cut -d ' ' -f 4 | sed s/,//
In order to then cut the video before the outro starts, it basically is another call to ffmpeg:
ffmpeg -i $infile -t $calculatedlength -acodec copy -vcodec copy $output
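The only real work is computing `$calculatedlength`: take the duration string grabbed above (e.g. "00:14:23.52") and subtract the 29 second outro. A small helper showing that arithmetic (in my scripts this is done with shell tools, so this Python version is just an illustration):

```python
# Turn an ffmpeg "Duration" value into the -t argument by subtracting
# the 29 second outro.
def cut_length(duration, outro_seconds=29):
    h, m, s = duration.split(":")
    total = int(h) * 3600 + int(m) * 60 + float(s) - outro_seconds
    h, rest = divmod(total, 3600)
    m, s = divmod(rest, 60)
    return "%02d:%02d:%05.2f" % (h, m, s)

print(cut_length("00:14:23.52"))  # -> 00:13:54.52
```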
That way you get just the tasting videos without intro and outro – ready to be enjoyed. To end this article I want to stress how awesome I think those whisky videos from Mr. Lüning are. It’s awesome to watch and learn. I hope that those videos will be available for many more years to come! Cheers!
Ever wondered what Earth looks like from orbit? Well, there are several cameras on the ISS which stream (when the ISS is over daylight territory) a live image to earth for you to see.
If you want to know where the International Space Station currently is you can always click here.
It’s becoming a fashion lately to release the source code of older but legendary commercial products to the public. Now Adobe has decided to gift the source code of its flagship product Photoshop, in its first version from 1990, to the Computer History Museum.
“That first version of Photoshop was written primarily in Pascal for the Apple Macintosh, with some machine language for the underlying Motorola 68000 microprocessor where execution efficiency was important. It wasn’t the effort of a huge team. Thomas said, “For version 1, I was the only engineer, and for version 2, we had two engineers.” While Thomas worked on the base application program, John wrote many of the image-processing plug-ins.”
Since my wife started working as a photographer on a daily basis, the routine of getting all the pictures off the camera after a long day full of photo shoots quickly became tedious for her.
Since we had some Raspberry Pis to spare, I gave it a try and created a small script which, when the Pi is powered on, automatically copies all contents of the attached SD card to the house’s storage server. Easy as Pi(e), so to speak.
This has now been an automated process for a couple of weeks – she comes home, puts all batteries on their chargers, drops the SD cards into the reader and powers on the Pi. After it has copied everything successfully, the Pi sends an email with a summary report of what has been done. So far so good – everything is then on our backed-up storage server.
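The original script isn’t published, but the idea fits in a few lines of shell. Everything here is an assumption for illustration – the mount point, the target path and the logging are not her actual setup:

```shell
#!/bin/sh
# Hypothetical sketch of the Pi's boot-time import job.
# Paths are assumptions; the original script is not published.

import_card() { # usage: import_card <card_mountpoint> <target_dir>
  mkdir -p "$2"
  # plain recursive copy for the sketch; rsync -av would be the natural
  # choice on the Pi to get resumable transfers and a per-file log
  cp -R "$1/." "$2/"
}

CARD=/media/sdcard
TARGET=/mnt/storage/photo-import/$(date +%F)

if [ -d "$CARD" ]; then
  import_card "$CARD" "$TARGET" > /tmp/import.log 2>&1
  # the real job then mails /tmp/import.log as the summary report
fi
```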
Now the problem was that she often does not start working on the pictures immediately. But she wants to take a closer look without having to sit in front of a big monitor – for example on her iPad in the kitchen while drinking coffee.
So what we needed was a tool that does this:
- take a folder (the automated import folder), get all images in there and order them by day
- display an overview per day of all pictures taken
- allow viewing the full-sized picture if necessary
- work on any mobile or stationary device in the household – preferably as an HTML5 responsive-design gallery
- be fast, because commonly over 200 pictures are taken per day
- be open source, because – well, open source is great – and we would probably need to tweak things a bit
Since I did not find anything near what we had in mind, I sat down this afternoon and wrote a tool myself. It’s open source and available for you to play with. Here’s a short description of what it does:
It’s pretty fast because it does not actively resize the images – instead it takes the thumbnail which the camera embedded in the original JPG file while storing the picture. It has some caching and can run on any operating system where Mono / .NET is available – which is probably almost anything, even the Raspberry Pi.
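The trick of reusing the camera’s embedded preview can be tried from the command line as well. This sketch assumes the exiftool utility is installed – it is not what the .NET tool actually calls internally, just the same idea:

```shell
#!/bin/sh
# Sketch: pull the EXIF preview the camera embedded while saving the JPG.
# Assumes exiftool is installed; file names are illustrative.

extract_thumb() { # usage: extract_thumb <camera.jpg> <thumb.jpg>
  command -v exiftool >/dev/null || {
    echo "exiftool not installed" >&2
    return 1
  }
  # -b dumps the binary value of the ThumbnailImage tag
  exiftool -b -ThumbnailImage "$1" > "$2"
}

# e.g.: extract_thumb IMG_0001.JPG cache/IMG_0001_thumb.jpg
```

Since the thumbnail already exists inside the file, this is essentially a copy operation – no decoding or scaling of the full image is needed, which is exactly why the gallery stays fast.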
The second edition of the book “Security Engineering” by Ross Anderson is available as a full download. It’s quite a reference and a must-read for anybody with an interest in security (which for example all developers should have).
“When I wrote the first edition, we put the chapters online free after four years and found that this boosted sales of the paper edition. People would find a useful chapter online and then buy the book to have it as a reference. Wiley and I agreed to do the same with the second edition, and now, four years after publication, I am putting all the chapters online for free. Enjoy them – and I hope you’ll buy the paper version to have as a convenient shelf reference.”
Source 1: http://www.cl.cam.ac.uk/~rja14/book.html
“A programmer is likely to get just one uninterrupted 2-hour session in a day” is one of the statements this great blog article makes on the matter of interrupting professionals while they do their hard work.
It’s important to understand how that idea-to-code conversion happens. For anyone without that experience: think of it like being very, very concentrated while juggling things. When you get distracted, it’s very likely that you drop something. In the worst case you never even get anything up to juggle…
“Say it with pictures. Describe your feelings about your everyday sysadmin interactions.”
Everybody knows ZIP files. It’s what comes out when you compress something on Windows or on OS X. It’s the commonly used format to store and exchange compressed data.
Now there are a lot of things you can do when you know a file format – especially one with many algorithms involved – inside out. There is a lot of text explaining the ZIP file format, like this one.
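As a trivial taste of “knowing the format”: every ZIP archive begins with the local file header signature PK\x03\x04, which you can check with nothing but coreutils. A toy sketch, not a full parser:

```shell
#!/bin/sh
# Toy check for the ZIP local file header signature (0x50 0x4b 0x03 0x04,
# i.e. "PK\3\4") – the first structure in every ZIP archive.

is_zip() { # usage: is_zip <file>; exit status 0 if the magic matches
  [ "$(head -c 4 "$1" | od -An -tx1 | tr -d ' \n')" = "504b0304" ]
}

# e.g.: is_zip archive.zip && echo "looks like a ZIP"
```

A real parser would continue from there: the interesting structures (central directory, end-of-central-directory record) sit at the end of the file, which is what makes many of the known ZIP tricks possible.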
The report for 2012 is in! Since 2008 Jehiah Czebotar has been monitoring his daily life, and he compiles a report from that data for everyone to read. He says himself that this is a hat tip to Nicholas Felton, who releases beautiful yearly reports of statistics around his own life.
I am a fan of those nice graphics and statistics about everyday life. They really give you insights that you wouldn’t be able to get otherwise. Especially with my own home automation and self-monitoring ambitions, quite a load of new ideas comes in from these nice graphics.
If you need data to fill your brand new (graph) database, go ahead, there’s something to load:
“KONECT (the Koblenz Network Collection) is a project to collect large network datasets of all types in order to perform research in network science and related fields, collected by the Institute of Web Science and Technologies at the University of Koblenz–Landau. KONECT contains over a hundred network datasets of various types, including directed, undirected, bipartite, weighted, unweighted, signed and rating networks. The networks of KONECT are collected from many diverse areas such as social networks, hyperlink networks, authorship networks, physical networks, interaction networks and communication networks. The KONECT project has developed network analysis tools which are used to compute network statistics, to draw plots and to implement various link prediction algorithms. The result of these analyses are presented on these pages. Whenever we are allowed to do so, we provide a download of the networks.”
KONECT currently holds 157 networks.
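The download files are essentially plain-text edge lists – one edge per line, with %-prefixed comment lines (my reading of the common KONECT layout; check each dataset’s README). That makes a first look a one-screen awk job:

```shell
#!/bin/sh
# Sketch: count edges and distinct nodes in a KONECT-style edge list
# (lines "source target [weight [timestamp]]", comments start with '%').
# The exact column layout is an assumption; verify against the dataset.

count_graph() { # usage: count_graph <edgelist-file>
  awk '!/^%/ {
         edges++
         if (!($1 in seen)) { seen[$1]; nodes++ }
         if (!($2 in seen)) { seen[$2]; nodes++ }
       }
       END { print edges " edges, " nodes " nodes" }' "$1"
}
```

Handy as a sanity check before feeding a multi-gigabyte network into your shiny new graph database.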
Source 1: http://konect.uni-koblenz.de/
If you can stand a little bit of cursing and bad words, and if you’re a developer, you should give this site a visit. The commit logs from last night speak for themselves:
It’s been a habit of id Software to release the source code of their previous games and game engines as open source when the time is due. That’s what happened with Doom 3 as well. Since beautiful code appeals to a lot of developers, it’s just a logical step to analyse the Doom 3 source code with the beauty aspects in mind.
Now there are two very good examples of such analysis.
Source 1: http://kotaku.com/5975610/the-exceptional-beauty-of-doom-3s-source-code
Source 2: ftp://ftp.idsoftware.com/idstuff/doom3/source/CodeStyleConventions.doc
Source 3: http://fabiensanglard.net/doom3/index.php
Source 4: https://github.com/TTimo/doom3.gpl
Usually the actuators that allow you to switch power on/off and measure power usage use the 434 MHz or 868 MHz wireless bands to communicate with their base station. Now the German manufacturer AVM has come up with a solution that allows you to switch on/off (with an actual button on the device itself, and wirelessly!) and to measure the power consumption of the devices connected to it.
As unspectacular as it looks, the features are quite spectacular:
- switch up to 2300 watts / 10 amperes
- use different predefined settings to switch on/off or even use Google Calendar to tell it when to switch
- measure the energy consumption of connected devices
- it uses the european DECT standard to communicate with a Fritz!Box base station (which is a requirement)
For around 50 Euro it’s quite an investment, but maybe I’ll give it a shot – especially the measurement functionality sounds great. Since I do not have one yet, I don’t know anything about how to access it through third-party software (h.a.c.s.?).
Source 1: www.avm.de/de/News/artikel/2013/start_fritz_dect_200.html
Source 2: www.avm.de/de/Produkte/Smart_Home/FRITZDECT_200/index.php
Source 3: en.wikipedia.org/wiki/Digital_Enhanced_Cordless_Telecommunications
Behold the beauty of the earth by night from orbit:
“0 A.D. (pronounced “zero-ey-dee”) is a free, open-source, historical Real Time Strategy (RTS) game currently under development by Wildfire Games, a global group of volunteer game developers. As the leader of an ancient civilization, you must gather the resources you need to raise a military force and dominate your enemies.”
Source 1: http://play0ad.com/
And once again some smart people have put their heads together and come up with something that will revolutionize your world. Well, it’s ‘just’ home automation, but it does look very, very promising – especially the human-machine interface through speech recognition. First of all, let’s start with a short introductory video:
“CastleOS is an integrated software suite for controlling the automation equipment in your home – an operating system for your castle, if you will. The first piece of the suite is what we call the “Core Service” – it acts as the central controller for the whole system. This runs on any relatively recent Windows computer (or more specifically, the computer that has an Insteon PLM or USB stick plugged in to it), and creates a network connection to both your home automation devices, and the second piece of the integrated suite – the remote access apps like the HTML5 app, Kinect voice control app, and future Android/iOS apps.” (from the CastleOS page)
So it’s said to be an all-in-one system that controls power outlets and devices through its core service, offering the option to add Kinect-based speech recognition so you can say things like “Computer, lights!”.
Unfortunately it comes with quite high and hard requirements regarding the hardware it’s compatible with. A Kinect may well exist in your household, but I doubt that you have the Insteon hardware to control your devices with.
That seems to be the main problem of all current home automation solutions – you simply have to have the matching hardware to use them. It’s not really possible to use anything and everything in a standardized way. Maybe it’s time to have a “home plug’n’play” specification set up for all hard- and software vendors to follow?
Source 1: http://www.castleos.com
I plan to speak at a couple of conferences this year – first in the line will be the Open Source Data Center conference in Nuremberg.
“The Open Source Data Center Conference, with a changing focus from year to year, offers you the unique possibility to meet international OS-experts, to benefit from their comprehensive experience and to gain the latest know-how for the daily practice. The conference is especially adapted to experienced administrators and architects.”
The topic I will be talking about (in German, though) is our fully virtualized data center testing environment at Rakuten Germany.
When you want to move from “testing in production” to “testing in a test environment”, it’s usually a very hard road. In this case we chose to virtualize every service in the data center, with all the same configurations and even the same network settings. We called that “Ignition”, and it allows us to test almost any aspect of our production environment without interfering with it. My talk will cover the thoughts and technologies behind that.
I also want to stress the fact that there are a lot more interesting talks than mine. Go to the OSDC 2013 homepage and find out for yourself.
Source 1: http://www.netways.de/osdc
So first a small video to get an idea of what I am implementing right now:
- to draw SVG in a human-controllable way
- to draw those nice gauges
- an animated HTML5 canvas odometer
I plan to add a lot more – like support for swiping gestures. So this will be – just like h.a.c.s. – a continuous project. Since I switched to OS X entirely at home, I use the great Coda 2 to write and debug the code. It helps a lot to have a two-browser set-up, because for some reason I still don’t feel that comfortable with the WebKit Web Inspector.
Another great feature of Coda 2 is AirPreview – it previews the page currently open in the editor on an iOS device running Diet Coda – oh, how I love those automations.
Every year between Christmas and New Year’s the hackers of the (mostly European) world gather for the Chaos Communication Congress – this year for the 29th time. The 29C3 takes place where it all started: in Hamburg. This year’s subtitle is:
Since the reports are already in that the fairydust has landed successfully in Hamburg, there’s even a proof picture:
Since FeM is already preparing it will be great to ‘attend’ the congress via live streams of all lectures.
Source 1: https://events.ccc.de/congress/2012/wiki/Main_Page
Source 2: http://blog.fem.tu-ilmenau.de/archives/836-Reisetagebuch-Mal-kurz-Hamburg.html
The other day I found out that there is an actual hackerspace in Bamberg – the city where I work and near which I live. For some strange reason it never occurred to me to search for a hackerspace nearby. But now, with the 29C3 at the gates, I found them on the “Congress Everywhere” pages (beware, the site is having a hard time right now).
Since I only just found it, and Christmas duties are taking their toll, I wasn’t able to go by and talk to the people there in person – I’ve just contacted them over their IRC channel (#backspace on freenode). Eventually I will have time to visit them, and I’ll have a report up here then.
For the time being, enjoy their website and the projects they have already done. Apparently there are some very interesting LED lighting experiments.
Wikipedia describes latency this way:
“Latency is a measure of time delay experienced in a system, the precise definition of which depends on the system and the time being measured. In communications, the lower limit of latency is determined by the medium being used for communications. In reliable two-way communication systems, latency limits the maximum rate that information can be transmitted, as there is often a limit on the amount of information that is “in-flight” at any one moment. In the field of human-machine interaction, perceptible latency has a strong effect on user satisfaction and usability.” (Wikipedia)
Given that, it’s quite important for any developer to know their numbers. Since latency has a huge impact on how software should be architected, it’s important to keep this in mind: