IPv6 migration guide for public administration

The pool of available IPv4 addresses is running out and IPv6 is coming – there is no doubt about that. This weblog, for example, has been reachable natively over IPv6 for more than two years. With every month that passes the situation gets more pressing, which makes the switch all the more important – not least for public administration. This extensive document offers interesting insights:

[Screenshot: downloadable 270-page PDF]

“Since the early days of the Internet, data has been transmitted using version 4 of the Internet Protocol (IPv4). Today this protocol is used everywhere, including the internal networks of public authorities and organisations. The Internet and all networks that use IPv4 today are facing a profound technical transition, because switching to the successor IPv6 is mandatory for everyone.

To the frequently asked question of which key factors are driving a migration to IPv6, there are two central answers:

  • There is a migration imperative, stemming from the IPv4 addresses that are already no longer available (in Asia).
  • The growing demand for addresses for devices of all sizes – from sensors to smartphones to washing machines – that need to communicate over IP networks aggravates the problem of the depleted IPv4 address space. The combination of both factors accelerates the push towards IPv6 migration.

In the future there will be many devices that have only an IPv6 address instead of an IPv4 address and are reachable only via IPv6. Even today, IPv6 can no longer be disabled without restrictions in the most recent operating system versions. Remaining IPv4 addresses can still be rented from providers for a fee, but when changing providers in the course of re-tendering services, those addresses cannot be ‘taken along’. A migration to IPv6 therefore not only guarantees the availability of a sufficient number of IP addresses, it also secures the future reachability of your own services without depending on a single provider.”
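A quick way to check whether a given host already answers over IPv6 is to look up its AAAA record and try to reach it – a minimal sketch, with example.org standing in for the real host name:

# does the host publish an IPv6 address?
dig AAAA example.org +short

# can it actually be reached over IPv6?
ping6 -c 3 example.org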

Source 1: IPv6 Migrationsleitfaden für die öffentliche Verwaltung
Source 2: IPv6-Best Practice für die öffentliche Verwaltung

the Panic Status Board is here!

Last year in June I wrote about the concept of a ubiquitous status display of the business in every office. Especially for development and operations it's important to have key measurements, status codes and project information in front of you at all times.

Back then I already wrote about the Panic status board, which is a great-looking example of such a display. Now there is an app from the company Panic that lets anyone create a status board like that. It's for iOS and looks awesome!


Source 1: Mirror, Mirror on the wall
Source 2: http://panic.com/statusboard/

How many space missions are exploring our solar system right now?

The number is 27!

[Image: overview of current solar system missions, CC-BY-SA Olaf Frohn]

Right now there are 27 different ongoing missions exploring our solar system – a high number for something that is not part of our daily news cycle. These missions currently concentrate on the Sun, Mars, Mercury, Venus, the Earth's moon and some asteroids.

Source: http://www.raumfahrer.net/news/raumfahrt/01052013213936.shtml

a virtual network inside your machine

Did you ever start a horde of virtual machines and a complicated VM-only network set-up just to simulate a moderately complex network and the interaction of nodes in that network? That's a tiresome, error-prone and labour-intensive process. Fear no more, there's a tool to the rescue.

“Mininet creates a realistic virtual network, running real kernel, switch and application code, on a single machine (VM, cloud or native), in seconds, with a single command:”

[Diagram from the Mininet front page]

“Because you can easily interact with your network using the Mininet CLI (and API), customize it, share it with others, or deploy it on real hardware, Mininet is useful for development, teaching, and research. Mininet is also a great way to develop, share, and experiment with OpenFlow and Software-Defined Networking systems.

Mininet is actively developed and supported, and is released under a permissive BSD Open Source license. We encourage contribution of code, bug reports/fixes, documentation, and anything else that can improve the system!”
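To give an idea of the workflow, here is a minimal sketch, assuming Mininet is installed; the topology parameters are just an example:

# create a tiny network (one switch, three hosts), run a connectivity test, tear it down
sudo mn --topo single,3 --test pingall

# or drop into the interactive CLI and experiment yourself
sudo mn --topo tree,depth=2,fanout=2
# mininet> h1 ping -c 3 h4
# mininet> exit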

Source: http://mininet.github.com/

A lot of Whisky videos – and a tutorial on how to cut videos on the command line

For just shy of two years I have been a fan of whisky. After I got the hang of the processes, tastes and smells around this spirit I started collecting bottles – collecting to drink them eventually.

There are a number of shops around the world where you can buy good-quality whisky. One of them happens to be located in Germany. This shop not only offers a huge selection but also a cross-seller's dream: tasting and explanation videos beneath many of the whiskies, in which a very talented Mr. Horst Lüning tastes and explains all things whisky.

The shop hosts all videos on YouTube. Since I am a big fan of podcasting and internet-based entertainment it's a great thing that, thanks to my little tool called “YouTubeFeast”, all new episodes and tasting videos get downloaded automatically. This way I have downloaded well over 650 whisky tasting and explanation videos to date.


As a matter of fact this is a really entertaining and educational series that I would even pay to get access to. That aside, every automatically downloaded video usually looks like this (German audio):

As you can see there's a short intro (8 seconds) and an outro (29 seconds) that every single video starts and ends with. Under normal circumstances there are two occasions when I play those videos:

  1. When I want to look up a particular whisky and get an overview of what it is going to be like.
  2. For over 12 years I have had a “nights playlist” – a playlist of things that are played back during the night – every night. For this it's important that the content is mainly speech, with well-normalized audio, and of course it needs to be interesting.

For the second use case it's important that there are not too many audio bumps and breaks. Unfortunately, as much as I like the intro and outro music, it's very bass-heavy and as such occasionally sleep-interrupting… So, just like when a good new-make spirit is distilled, the start and the end of the run need to be separated from the heart that makes up the spirit.

Every 4-6 months I take all newly added videos and cut them down and add them to the nights playlist folders. The process is like this:

  1. Rename them: Remove the following things from the filenames
    “Whiskey Verkostung – “, “Whiskey Likör Verkostung “, “Whiskygläser “, “Whisky-Verkostung – “, “Whisky Vorstellung “, “Whisky Verkostung “, “Whisky Verkosten “, “Whisky Tasting – “, “Whisky Tasting “, “Whisky Likör Verkostung “, “Whiskey-Verkostung – “, “Whiskey Verkostung “, “Whiskey Tasting – “, “- ”

    To rename the files I am usually using the freeware tool Rename Master – it’s awesome!

  2. Cut the intro away.
    This is best done with a simple ffmpeg command:

    ffmpeg -i "$inputfile" -ss 00:00:08.0 -acodec copy -vcodec copy "$output"

  3. Cut the outro away.
    Using a little shell script it's fairly easy to first get the full length of each video file, subtract 29 seconds from that length and then keep only the heart up to that point – a sketch of such a script follows after this list.

    To get the length, the following short line does a great job:

    ffmpeg -i "$1" 2>&1 | grep Duration | cut -d ' ' -f 4 | sed s/,//

    In order to then cut the video just before the outro starts, it is basically another call to ffmpeg:

    ffmpeg -i "$infile" -t "$calculatedlength" -acodec copy -vcodec copy "$output"
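Here is a minimal sketch of such a trim script, assuming GNU date for the time arithmetic and the HH:MM:SS duration format that ffmpeg prints; the file names are placeholders:

#!/bin/bash
# cut-outro.sh – drop the last 29 seconds (the outro) from a video without re-encoding
infile="$1"
output="trimmed-$1"

# full length as reported by ffmpeg, e.g. 00:14:37.12
duration=$(ffmpeg -i "$infile" 2>&1 | grep Duration | cut -d ' ' -f 4 | sed s/,//)

# strip the fractional seconds and subtract the 29-second outro
total=$(date -u -d "1970-01-01 ${duration%.*}" +%s)
newlength=$(date -u -d "@$((total - 29))" +%H:%M:%S)

# copy the audio and video streams up to the new length
ffmpeg -i "$infile" -t "$newlength" -acodec copy -vcodec copy "$output"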

That way you get just the tasting videos without intro and outro – ready to be enjoyed. To close this article I want to stress how awesome I think those whisky videos from Mr. Lüning are. It's awesome to watch and learn. I hope those videos will be available for many more years to come! Cheers!

Source 1: http://www.joejoesoft.com/vcms/108/
Source 2: http://www.whisky.de

Adobe Photoshop version 1 source code

It’s becoming a fashion lately to release the source code of older but legendary commercial products to the public. Now Adobe decided to gift the source code of their flagship product Photoshop in it’s first version from 1990 to the Computer History Museum.

[Image: Photoshop 1.0 splash screen]

“That first version of Photoshop was written primarily in Pascal for the Apple Macintosh, with some machine language for the underlying Motorola 68000 microprocessor where execution efficiency was important. It wasn’t the effort of a huge team. Thomas said, “For version 1, I was the only engineer, and for version 2, we had two engineers.” While Thomas worked on the base application program, John wrote many of the image-processing plug-ins.”

Source: http://www.computerhistory.org/atchm/adobe-photoshop-source-code/

Automated Picture Tank and Gallery for a photographer

Since my wife started working as a photographer on a daily basis, the routine of getting all the pictures off the camera after a long day full of photo shoots quickly bored her.

Since we had some Raspberry Pis to spare I gave it a try and created a small script which, when the Pi gets powered on, automatically copies all contents of the attached SD card to the house's storage server. Easy as Pi(e) – so to speak.


This has now been an automated process for a couple of weeks – she comes home, puts all batteries on their chargers, drops the SD cards into the reader and powers on the Pi. After it has copied everything successfully, the Pi sends an e-mail with a summary report of what has been done. So far so good – everything ends up on our backed-up storage server.
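The actual script is not part of this post, but a minimal sketch of the idea could look like this – host names, mount points and the mail address are made-up placeholders:

#!/bin/bash
# picture-tank.sh – copy everything from the SD card to the storage server and mail a report
SRC="/media/sdcard"
DEST="storage-server:/volume/photo-import/$(date +%Y-%m-%d)"

# rsync makes the copy resumable: an interrupted run can simply be repeated
rsync -av --stats "$SRC/" "$DEST/" > /tmp/import-report.txt 2>&1

# send the rsync summary as the report
mail -s "picture import $(date +%Y-%m-%d)" photographer@example.org < /tmp/import-report.txt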

The problem was that she often does not start working on the pictures immediately. But she does want to take a closer look without having to sit in front of a big monitor – for example on her iPad in the kitchen while drinking coffee.

So what we needed was a tool that does this:

  • take a folder (the automated import folder) and get all images in there, order them by day
  • display an overview per day of all pictures taken
  • allow viewing the full-sized picture if necessary
  • work on any mobile or stationary device in the household – preferably an HTML5 responsive-design gallery
  • it should be fast, because commonly over 200 pictures are taken per day
  • it should be open source because – well, open source is great – and we would probably need to tweak things a bit

Since I did not find anything close to what we had in mind, I sat down this afternoon and wrote a tool myself. It's open-sourced and available for you to play with. Here's a short description of what it does:

It’s called GalleryServer and basically is an embedded http server which takes all .jpg files from a folder (configurable) and offers you some handy tool urls which respons with JSON data for you to work with. I’ve written a very small html user interface with a bit of javascript (using the great html5 kickstart) that allows you to see all available days and get a nice thumbnail overview of each day – when you click on it it opens the full-size image in a new window.

It’s pretty fast because it’s not actively resizing the images – instead it’s taking the thumbnail picture from the original jpg file which the camera placed there during storing the picture. It’s got some caching and can be run on any operating system where mono / .net is available – which is probably anything – even the RaspberryPi.

Source 1: my wife's page
Source 2: 99lime html5 kickstart boilerplate
Source 3: https://github.com/bietiekay/GalleryServer

Security Engineering — The Book

The second edition of the book “Security Engineering” by Ross Anderson is available as a full download. It’s quite a reference and a must-read for anybody with an interest in security (which for example all developers should have).

“When I wrote the first edition, we put the chapters online free after four years and found that this boosted sales of the paper edition. People would find a useful chapter online and then buy the book to have it as a reference. Wiley and I agreed to do the same with the second edition, and now, four years after publication, I am putting all the chapters online for free. Enjoy them – and I hope you'll buy the paper version to have as a convenient shelf reference.”

Source 1: http://www.cl.cam.ac.uk/~rja14/book.html

a good source of all things javascript libraries

Choosing the right JavaScript library is one of the key elements of creating a good prototype – or even a production application – in a very short time. If you want new impressions, hints and links to the JavaScript libraries that will make your next project a success, look no further:

[Screenshot: the Pinterest webdev board]

Source 1: http://pinterest.com/0x0/webdev/

You shall not interrupt a programmer

“A programmer is likely to get just one uninterrupted 2-hour session in a day” is one of the statements this great blog article makes on the matter of interrupting professionals while they do their hard work.

It’s an important thing to understand how that idea to code conversion thing happens. For anyone without that experience: Think of it like being very very concentrated and juggling things. When you get abstracted it’s very likely that you drop something. In the worst case you never even get something to juggle…

Source 1: http://blog.ninlabs.com/2013/01/programmer-interrupted/

the ZIP file that never ends…

Everybody knows ZIP files. It's what comes out when you compress something on Windows or OS X. It's the commonly used format to store and exchange compressed data.

Now there’s a lot of things you can do when you know file formats, especially those with many algorithms involved, inside out. There is a lot of text explaining the ZIP file format, like this one.

With that knowledge it is possible to create a valid ZIP file that never ends. You might already know ZIP bombs, but this one is a different animal: your computer won't stop decompressing…

Source 1: http://research.swtch.com/zip
Source 2: http://steike.com/code/useless/zip-file-quine/
Source 3: http://en.wikipedia.org/wiki/Zip_bomb

personal annual reports

The report for 2012 is in! Since 2008 Jehiah Czebotar has been monitoring his daily life, and he compiles a report from that data for everyone to read. He says himself that this is a hat tip to Nicholas Felton, who releases beautiful yearly reports of statistics around his own life.

I am a fan of those nice graphics and statistics about everyday life. They really give you insights that you wouldn't be able to get otherwise. Especially with my own home automation and self-monitoring ambitions, quite a load of new ideas comes from these nice graphics.

Source 1: http://jehiah.cz/one-two/
Source 2: http://www.feltron.com/

how about some big data?

If you need data to fill your brand new (graph) database, go ahead, there’s something to load:

“KONECT (the Koblenz Network Collection) is a project to collect large network datasets of all types in order to perform research in network science and related fields, collected by the Institute of Web Science and Technologies at the University of Koblenz–Landau. KONECT contains over a hundred network datasets of various types, including directed, undirected, bipartite, weighted, unweighted, signed and rating networks. The networks of KONECT are collected from many diverse areas such as social networks, hyperlink networks, authorship networks, physical networks, interaction networks and communication networks. The KONECT project has developed network analysis tools which are used to compute network statistics, to draw plots and to implement various link prediction algorithms. The result of these analyses are presented on these pages. Whenever we are allowed to do so, we provide a download of the networks.”

KONECT currently holds 157 networks, of which

  • 36 are undirected,
  • 51 are directed,
  • 70 are bipartite,
  • 68 are unweighted,
  • 72 allow multiple edges,
  • 6 have signed edges,
  • 10 have ratings as edges,
  • 1 allows multiple weighted edges,
  • and 64 have edge arrival times.

Source 1: http://konect.uni-koblenz.de/

new actuators by AVM to switch power on/off and measure power usage

Usually the actuators that allow you to switch power on/off and measure power usage communicate with their base station over the 434 MHz or 868 MHz wireless bands. Now the German manufacturer AVM has come up with a solution that lets you switch power on/off (with an actual button on the device itself, and wirelessly!) and measure the power consumption of the devices connected to it.

As unspectacular as it looks, the features are quite spectacular:


  • switch up to 2,300 watts / 10 amperes
  • use different predefined settings to switch on/off or even use Google Calendar to tell it when to switch
  • measure the energy consumption of connected devices
  • it uses the European DECT standard to communicate with a Fritz!Box base station (which is a requirement)

For around 50 euros it's quite an investment, but maybe I'll give it a shot – especially the measurement functionality sounds great. Since I do not have one yet, I don't know anything about how to access it from third-party software (h.a.c.s.?).

Source 1: www.avm.de/de/News/artikel/2013/start_fritz_dect_200.html
Source 2: www.avm.de/de/Produkte/Smart_Home/FRITZDECT_200/index.php
Source 3: en.wikipedia.org/wiki/Digital_Enhanced_Cordless_Telecommunications

0 A.D. – A free, open-source game

“0 A.D. (pronounced “zero-ey-dee”) is a free, open-source, historical Real Time Strategy (RTS) game currently under development by Wildfire Games, a global group of volunteer game developers. As the leader of an ancient civilization, you must gather the resources you need to raise a military force and dominate your enemies.”

Source 1: http://play0ad.com/

my home is my castle – CastleOS: the home automation operating system

And once again some smart people have put their heads together and come up with something that will revolutionize your world. Well, it's ‘just’ home automation, but it does look very, very promising – especially the human-machine interface through speech recognition. First of all, let's start with a short introductory video:

“CastleOS is an integrated software suite for controlling the automation equipment in your home – an operating system for your castle, if you will. The first piece of the suite is what we call the “Core Service” – it acts as the central controller for the whole system. This runs on any relatively recent Windows computer (or more specifically, the computer that has an Insteon PLM or USB stick plugged in to it), and creates a network connection to both your home automation devices, and the second piece of the integrated suite – the remote access apps like the HTML5 app, Kinect voice control app, and future Android/iOS apps.” (from the CastleOS page)

So it’s said to be an all-in-one system that controls power-outlets and devices through it’s core service and offering the option to add Kinect based speech recognition to say things like “Computer, Lights!”.

Unfortunately it comes with quite high and hard requirements regarding the hardware it's compatible with. A Kinect possibly exists in your household, but I doubt that you have the Insteon hardware to control your devices with.

That seems to be the main problem of all current home automation solutions – you simply have to have the matching hardware to use them. It's not really possible to use anything and everything in a standardized way. Maybe it's time for a “home plug'n'play” specification for all hardware and software vendors to follow?

Source 1: http://www.castleos.com

I will be speaking at Open Source Data Center conference 2013

I plan to speak at a couple of conferences this year – first in line will be the Open Source Data Center Conference in Nuremberg.


“The Open Source Data Center Conference, with a changing focus from year to year, offers you the unique possibility to meet international OS-experts, to benefit from their comprehensive experience and to gain the latest know-how for the daily practice. The conference is especially adapted to experienced administrators and architects.”

The topic I will be talking about (in German, though) is our fully virtualized data center testing environment at Rakuten Germany.


Moving from “testing in production” to “testing in a test environment” is usually a very hard road. In this case we chose to virtualize every service in the data center, with all the same configurations and even the same network settings. We called that “Ignition”, and it allows us to test almost any aspect of our production environment without interfering with it. My talk will cover the thoughts and technologies behind that.

I also want to stress the fact that there are a lot more interesting talks than mine. Go to the OSDC 2013 homepage and find out for yourself.

Source 1: http://www.netways.de/osdc

h.a.c.s. html5 user interface re-implemented

Slow is the right word to describe my HTML and JavaScript learning-by-doing progress right now. I have chosen the h.a.c.s. user interface as a suitable project to learn HTML and JavaScript up to the point where I can start to write usable websites. The h.a.c.s. UI seemed a good choice because at the moment it is only used by my family, and they are a bunch of battle-proven beta testers.

So first, a small video to get an idea of what I am implementing right now:

Everything you can see is SVG- and HTML-rendered – made with the help of awesome JavaScript libraries, namely:

  • jQuery
    • for the basic javascript coverage
  • Raphaël
    • to draw SVG in a human-controllable way
  • JustGage
    • to draw those nice gauges
  • OdoMeter
    • an animated HTML5 canvas odometer

I plan to add a lot more – for example swipe gestures. So this will be – just like h.a.c.s. – a continuous project. Since I switched entirely to OS X at home I use the great Coda 2 to write and debug the code. It helps a lot to have a two-browser set-up, because for some reason I still don't feel that comfortable with the WebKit Web Inspector.


Another great feature of Coda 2 is AirPreview – it previews the page currently open in the editor on an iOS device running Diet Coda. Oh, how I love those automations.

So I reached the first goal I set myself for the user interface: it does the things the old UI did and, on top of that, it is maintainable. I am still struggling with JavaScript here and there – mainly because debugging and tracing is oh-so-difficult (or I am too slow to understand it).

If you have any recommendation for a JavaScript editor that can handle multiple includes, step-by-step debugging and good tracing of events, please comment!

Source 1: jQuery
Source 2: Raphaël
Source 3: JustGage
Source 4: OdoMeter

N.O-T/MY-D/E.PA/R.T-ME

Every year between Christmas and New Year's the hackers of the (mostly European) world gather for the Chaos Communication Congress – this year for the 29th time. The 29c3 takes place where it all started: in Hamburg. This year's subtitle is:

[Image: the 29c3 motto “NOT MY DEPARTMENT”]

Reports are already in that the fairydust has landed successfully in Hamburg – there is even a picture to prove it:

[Photo: the fairydust has arrived in Hamburg]

Since FeM is already preparing it will be great to ‘attend’ the congress via live streams of all lectures.

Source 1: https://events.ccc.de/congress/2012/wiki/Main_Page
Source 2: http://blog.fem.tu-ilmenau.de/archives/836-Reisetagebuch-Mal-kurz-Hamburg.html

there’s a hackerspace in Bamberg

The other day I found out that there is an actual hackerspace in Bamberg – the city where I work and near which I live. For some strange reason it never occurred to me to search for a hackerspace nearby. But now, with the 29c3 at the gates, I found them on the “Congress Everywhere” pages (beware, the site is having a hard time right now).

[Banner image: cc-by richterprodukt]

Since I only just found it, and Christmas duties take their toll, I wasn't able to go by and talk to the people there in person – I've just contacted them on their IRC channel (#backspace on freenode). Eventually I will have time to visit them, and I'll have a report up here then.

For the time being, enjoy their website and the projects they have already done. Apparently there are some very interesting LED lighting experiments.

Source: http://www.hackerspace-bamberg.de/

know your numbers!

Wikipedia describes latency this way:

“Latency is a measure of time delay experienced in a system, the precise definition of which depends on the system and the time being measured. In communications, the lower limit of latency is determined by the medium being used for communications. In reliable two-way communication systems, latency limits the maximum rate that information can be transmitted, as there is often a limit on the amount of information that is “in-flight” at any one moment. In the field of human-machine interaction, perceptible latency has a strong effect on user satisfaction and usability.” (Wikipedia)

Given that it’s quite important for any developer to know his numbers. Since latency has a huge impact on how software should be architected it’s important to keep that in mind:

 

[Chart: interactive visualization of latency numbers]

 

Source: http://www.eecs.berkeley.edu/~rcs/research/interactive_latency.html

putting h.a.c.s. (or other) sensory data into a motion based webcam image

I am using some Raspberry Pis to monitor the areas around the house – mainly because it's awesome to see how many animals roam around the garden throughout the day. On the Pi I am using the current Debian image and motion to interface with a USB webcam.

Now I wanted to include sensor data in the webcam images – like the current temperature. The nice thing about h.a.c.s. is that it can deliver every sensor's data as nice and easy-to-use JSON. The only challenge is to get that number into motion.

First of all I need to get the URL together where I can access sensor data for the right sensor. In this case it’s the sensor called “Schuppen” – an outdoor sensor measuring the current temperature around the house.

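The resulting request – the same one used in the script further down – returns the sensor's latest reading as JSON:

# ask h.a.c.s. for the latest temperature entry of the sensor named "Schuppen"
curl -s 'http://hacs/data/sensor?name=Schuppen&type=temperature&lastentry=true'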

Now there is an easy way to ‘feed’ data into a running motion instance: motion offers an http control port (8080 by default) and allows setting the text_left and text_right properties.

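A simple GET request against that control port – the same call the script further down uses – sets the left-hand overlay text to, in this example, “remote-controlled-text”:

# set the left overlay text of camera 0 via motion's control port
curl -s 'http://localhost:8080/0/config/set?text_left=remote-controlled-text'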

So – that’s how the text is set – now how to get the temperature value, and just that, out of the JSON response of h.a.c.s.? Easy – use jsawk!

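jsawk lets us run a tiny piece of JavaScript over the response and print only the measured value (here: the second field of the first data tuple):

# extract just the value of the most recent data point from the JSON response
curl -s 'http://hacs/data/sensor?name=Schuppen&type=temperature&lastentry=true' | jsawk 'return this.data[0][1]'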

With all that, a very small shell script is quickly hacked together. If you want to copy it into your editor, here's the code:

#!/bin/bash
# fetch the latest temperature of the "Schuppen" sensor from h.a.c.s. ...
TEMPERATURE=`curl -s 'http://hacs/data/sensor?name=Schuppen&type=temperature&lastentry=true' | jsawk 'return this.data[0][1]'`
# ... and push it into motion's left overlay text via the control port
curl -s 'http://localhost:8080/0/config/set?text_left='$TEMPERATURE

Localhost port 8080 is the address and port of the motion control server.

To have the webcam overlay updated regularly, I added the script to crontab, and from now on the current temperature is in every webcam image – hurray!
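The crontab entry for that could look like this – the script path is a made-up placeholder, and the interval is a matter of taste:

# update the overlay text every five minutes
*/5 * * * * /home/pi/update-motion-text.sh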

Source 1: motion
Source 2: jsawk

Build a Brain – SPAUN

SPAUN, or Semantic Pointer Architecture Unified Network, is a promising next step in the pursuit of simulating a human brain. Built upon the Nengo neural simulator, scientists at the University of Waterloo in Ontario were able to report their first breakthrough results.

In 2013 there will be a book from Oxford University Press called ‘How to Build a Brain’ which will describe in depth what made the astonishing results possible.

But what are the results?

Well, that looks like number recognition – and in fact that's what it is. SPAUN, as the scientists refer to their Frankenstein brain, is now capable of solving 8 different tasks, one of them being number recognition. There are videos of all 8 tasks being performed.

The semantic pointers are named after the pointers common in computer science:

“Higher-level cognitive functions in biological systems are made possible by semantic pointers. Semantic pointers are neural representations that carry partial semantic content and are composable into the representational structures necessary to support complex cognition.

The term ‘semantic pointer’ was chosen because the representations in the architecture are like ‘pointers’ in computer science (insofar as they can be ‘dereferenced’ to access large amounts of information which they do not directly carry). However, they are ‘semantic’ (unlike pointers in computer science) because these representations capture relations in a semantic vector space in virtue of their distances to one another, as typically envisaged by connectionists. “

Source 1: http://nengo.ca/build-a-brain
Source 2: http://nengo.ca/build-a-brain/spaunvideos/

 

being there, without being there: Good Night Lamp

Isn’t technology great when it brings families closer together, even when they are thousand miles apart?

Home automation does not only mean flipping switches and reading sensors in every imaginable way. It also means creativity. And being creative with the functionality at hand is really what makes home automation so interesting.

It’s those creative ways that adds use to the nerdy home automation switches and sensors. It’s what adds practicality.

Good Night Lamp is such a creative solution that makes use of home automation hardware and the internet. To understand the concept, watch a video:

“The Good Night Lamp is a family of connected lamps that lets you communicate the act of coming back home to your loved ones, remotely.”

Well I don’t know if it really needs specialized hardware like those Good Night Lamp products. But certainly if you have some sensory and the ability to flip switches it is fairly easy to come up with workflows and things that should happen when the circumstances are right. In fact I do not believe in highly specialized products like a single-purpose lamp. But I do believe, if those lamps are connected to a network and if you can access them through some sort of API, that those types of products will pave the way to a connected world we only know from science fiction yet.

Another good approach to this is the long-promised IP-capable light bulb. Engineers have used the “light bulb with an IP address” as an example for IPv6 for years now, and it seems the time has come when we really want to assign an IP address to every light bulb in our home.

LIFX is a good starting concept, and in a couple of months there will be more manufacturers offering networked light bulb solutions.

 

Source 1: http://goodnightlamp.com/
Source 2: http://lifx.co/

ELV MAX! Cube C# Library – control your cube!

I was asked if it would be possible to get the ELV MAX! Cube interfacing functionality outside of h.a.c.s. – maybe as a library. Sure, that is possible! And to speed things up I give you the ELV MAX! Cube C# library, called MAXSharp.

It’s a plain and simple library without much dependencies – in fact there’s only some threading and the FastSerializer. Since I am using this library with h.a.c.s. as well I did not remove the serializer implementation.

There’s a small demo program included which is called MAXSharpExample. The library itself contains the abstractions necessary to get information from the ELV MAX! Cube. It does not contain functionality to control the cube – if you want to add, feel free it’s all open sourced and I would love to see pull requests!

The architecture is based upon polling – I know events would make a cleaner design, but for various reasons I am using queues in h.a.c.s. and therefore MAXSharp does as well. The example application spins up the ELV MAX interfacing / handling thread, and as soon as you're connected you can access all house-related information and get diff events from the cube.

Any comment is appreciated!

Source 1: State of Reverse Engineering
Source 2: https://github.com/bietiekay/MAXSharp

if this then that – simple recipes for home automation

Workflows are important – and with a lot of switching possibilities and even more sensors that measure things, it becomes important to be able to implement workflows behind all that hardware.

It’s nice to be able to switch light on and of when you want to. But isn’t it even better to have some sort of workflow behind all sorts of triggers. Think of the possibilities!

If this then that is a service to help you define very simple workflows:

Want an example?

It knows a lot of ‘this’ and a lot of ‘that’. So give it a try or even better, add your own home automation software as ‘this’ and ‘that’ :-)

Source 1: https://ifttt.com

Blogroll: Nerdcore NC-Sources OPML

A couple of days ago the well-known and much-read author of the Nerdcore weblog created a page he calls NC-Sources, which lists all the sources he has in his RSS reader to get new information from. As you can imagine, this is pure gold for those who want interesting links to all things nerd.

Unfortunately NC-Sources is only available as a web page that lists the name and the RSS feed URL of each source. You cannot import that into your RSS reader to use it for your own informational needs.

Here I am to the rescue. I've taken all the URLs from that NC-Sources page, which resulted in a file that lists the page URL and the RSS feed URL on alternating lines. A short trip to the command line and the use of awk filtered just the RSS feed URLs into a new file, which was then fed into an OPML generator.
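Assuming the page URL comes first and the feed URL second in each pair of lines, the awk step boils down to keeping the even-numbered lines (the file names are placeholders):

# keep only the even-numbered lines, i.e. the RSS feed URLs
awk 'NR % 2 == 0' nc-sources.txt > feed-urls.txt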

So now you can download the OPML file to import it into your own RSS reader. Get it here.

Source 1: NC-Sources
Source 2: NC-Sources OPML File
Source 3: OPMLBuilder

eBook: How To Create A Mind

Those who know me well know that I am a strong believer in artificial intelligence. We're not there yet – not even close; I don't know if we (as in humanity) have even left the launchpad. But I strongly believe that it will be possible to simulate human thought – maybe not in the way AI is defined:

“The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of optimism, but has also suffered setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.” (Wikipedia)

More on that in another article in the future, since I started working on that subject a while ago and keep coming across a lot of authors and – mostly science fiction – books that deal with the topic.

Now there is a new book by Ray Kurzweil. It’s called “How To Create A Mind” and deals with the topic of how human thoughts come to be and how the human mind seems to work.

“Now, in his much-anticipated How to Create a Mind, he takes this exploration to the next step:  reverse-engineering the brain to understand precisely how it works, then applying that knowledge to create vastly intelligent machines.

Drawing on the most recent neuroscience research, his own research and inventions in artificial intelligence, and compelling thought experiments, he describes his new theory of how the neocortex (the thinking part of the brain) works: as a self-organizing hierarchical system of pattern recognizers. Kurzweil shows how these insights will enable us to greatly extend the powers of our own mind and provides a roadmap for the creation of superintelligence—humankind’s most exciting next venture. We are now at the dawn of an era of radical possibilities in which merging with our technology will enable us to effectively address the world’s grand challenges.”

Source 1: http://howtocreateamind.com/
Source 2: http://en.wikipedia.org/wiki/Artificial_intelligence

extending the house storage

In times when mobile phone cameras produce pictures of 2 MByte each and decent DSLR cameras produce pictures of more than 20 MByte each – not to speak of the various sensors around the house – the question of how all of this is going to be stored is an interesting one.

Prices for mass storage have been dropping for years and hard disk sizes keep getting bigger. 3 TByte drives are fairly cheap now – cheap enough to consider serious redundancy even for home use.

With the home automation hobby and very specific needs when it comes to home entertainment and watching TV (we don't watch live TV…), we have a relatively huge demand for storage space. We are already storing over 10 TByte of data, fully encrypted, redundant and backed up.

Our file server infrastructure grew with the needs over the years.

It started way back in 2003 when I set up the first file server for my apartment. It was a fairly huge 19-inch case with 5 hard disks (100 GByte each). This machine was full by 2005 and needed replacement.

We’re in IDE land back then. Because the system hardware died on me due to a power surge all the disks and a new mainboard were seated in a new case with room for a lot of disks.

One interesting detail might be that I consistently used Windows Server for that purpose.

The machine always wasn’t just a fileserver. It was smtp, imap, nntp and media server all the time. That lead to a growing demand of CPU and memory resources. It started with an 800 Mhz AMD Athlon (which died quickly) and for the next years to come I used a 2.8 Ghz Intel Pentium 4. Everything started with Windows Server 2003 – bought in the Microsoft Store when I was a Microsoft employee.

Disk space demand kept growing, and in 2009 a new case, a new mainboard + memory and new disks were due.

Since 2009 a Core 2 Quad Q9550 at 2.8 GHz with 16 GByte of memory has been the heart of our file server. Since we frequently live-transcode video streams to feed iPads and iPhones around the house, that machine has plenty of grunt to meet the demand: we can have 2 iPhones and 2 iPads playing 720p content without stutters. Back in 2009 we also switched to a mixed IDE and SATA setup, as you can see in the picture:

There was plenty of room when the new case arrived – but it was getting crowded just 2 years later, in 2011. Every seat was taken – which means 13 disks in that case and 1 attached through USB.

That adds up to more than 16 TByte of raw storage. In 2011 we also upgraded to Windows Server 2008. We never lost a bit with that operating system, not under the heaviest load and even through serious hardware malfunctions. A lot of those 13 disks died over the years: almost one every 2 months was replaced – most of them through extended warranties – and of course we always have a spare ready to take its place. Only once did I have to rush to a store to get a replacement drive, when two disks failed shortly after each other. That's why there's that 2 TByte drive in the 1.5 TByte array…

So it’s getting full again. Since that case isn’t really holding more disks and replacing them is getting harder because of the tight fit the idea was born to now add a bigger case but to just add a NAS/SAN which holds between 6 to 8 disks at once, comes with it’s own redundancy management and exports one big iSCSI volume.

That said, a network card was added to the file server and a QNAP TS-859 Pro+ 8-bay appliance was bought. It is a shiny black device that uses less power than an additional case with extra CPU and memory would have used, and after calculating a number of combinations it is even the cheapest solution for an 8-drive set-up.

After some intensive testing it seems that the iSCSI approach is the most robust one. Since I am just done with testing the appliance the next step is to buy drives. So stay tuned!

Source 1: http://www.qnap.com/de/index.php?lang=de&sn=375&c=292&sc=528&t=532&n=3486

What happened to: realtime Radiosity lighting

Back in 2006 I wrote about a new technology that the then equally new company Geomerics was demoing.

Back in 2006 everything was just a demo. Now it seems that Geomerics has found some very well-known customers, and – without many people noticing – a lot of the graphical beauty of current-generation games comes from the capabilities that real-time radiosity lighting adds.

“Geomerics delivers cutting-edge graphics technology to customers in the games and entertainment industries. Geomerics’ Enlighten technology is behind the lighting in best-selling titles including Battlefield 3, Need for Speed: The Run, Eve Online and Quantum Conundrum. Enlighten has been licensed by many of the top developers in the industry, including EA DICE, EA Bioware, THQ, Take 2 and Square Enix.” (Source)

There is even an updated version of the demo video:

Source 1: real time radiosity lighting article from 2006
Source 2: Geomerics Presentations
Source 3: More Geomerics Media

Realtime Video Effects: Time Remap

With today's processing power and the quirks of current-generation digital video cameras you can have a lot of fun – if you know how:

The effect demonstrated above is called time remapping. The description of the video tells us more about the effect itself:

The effect was discovered accidentally by a photographer called Jacques Henri Lartigue at the beginning of the 20th century (in 1912, to be precise). He took a picture of a race car with elliptically deformed tires – an effect caused by the characteristics of the camera he was using.

Source 1: http://vimeo.com/7878518
Source 2: http://en.wikipedia.org/wiki/Jacques_Henri_Lartigue
Source 3: http://bokeh.fr/blog/photographes/la-voiture-deformee-de-jacques-henri-lartigue/