N.O-T/MY-D/E.PA/R.T-ME

Every year between Christmas and New Year's the hackers of the (mostly European) world gather for the Chaos Communication Congress – this year for the 29th time. The 29c3 takes place where it all started: in Hamburg. This year's subtitle is:

[Screenshot: the 29c3 subtitle "NOT MY DEPARTMENT"]

Reports are already in that the fairydust has landed successfully in Hamburg – there's even a proof picture:

[Photo: the fairydust rocket after landing in Hamburg]

Since FeM is already preparing its streaming setup, it will be great to 'attend' the congress via live streams of all the lectures.

Source 1: https://events.ccc.de/congress/2012/wiki/Main_Page
Source 2: http://blog.fem.tu-ilmenau.de/archives/836-Reisetagebuch-Mal-kurz-Hamburg.html

there’s a hackerspace in Bamberg

The other day I found out that there is an actual hackerspace in Bamberg – the city where I work and near which I live. For some strange reason it never occurred to me to search for a hackerspace in the area. But now that the 29c3 is at the gates, I found them on the "Congress everywhere" pages (beware, that wiki is having a hard time right now).

[Banner image: cc-by richterprodukt]

Since I've only just found them and Christmas duties take their toll, I wasn't able to go by and talk to the people there in person – so far I've just contacted them over their IRC channel (#backspace on freenode). Eventually I will find the time to visit, and I'll have a report up here then.

For the time being, enjoy their website and the projects they have already done. Apparently there are some very interesting LED lighting experiments.

Source: http://www.hackerspace-bamberg.de/

know your numbers!

Wikipedia describes latency this way:

“Latency is a measure of time delay experienced in a system, the precise definition of which depends on the system and the time being measured. In communications, the lower limit of latency is determined by the medium being used for communications. In reliable two-way communication systems, latency limits the maximum rate that information can be transmitted, as there is often a limit on the amount of information that is “in-flight” at any one moment. In the field of human-machine interaction, perceptible latency has a strong effect on user satisfaction and usability.” (Wikipedia)

Given that, it's quite important for any developer to know his numbers. Latency has a huge impact on how software should be architected, so it's important to keep these figures in mind:

 

[Screenshot: interactive visualization of the latency numbers every programmer should know]
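The linked page is interactive and shows how the numbers shift over the years; from memory, the canonical values (going back to Jeff Dean's famous list – treat them as orders of magnitude, not gospel) are roughly:

L1 cache reference: ~0.5 ns
Branch mispredict: ~5 ns
L2 cache reference: ~7 ns
Mutex lock/unlock: ~25 ns
Main memory reference: ~100 ns
Compress 1 KByte with Zippy: ~3,000 ns
Send 1 KByte over a 1 Gbps network: ~10,000 ns
Read 4 KByte randomly from SSD: ~150,000 ns
Read 1 MByte sequentially from memory: ~250,000 ns
Round trip within the same datacenter: ~500,000 ns
Read 1 MByte sequentially from SSD: ~1,000,000 ns
Disk seek: ~10,000,000 ns
Read 1 MByte sequentially from disk: ~20,000,000 ns
Packet round trip California ↔ Netherlands: ~150,000,000 ns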

 

Source: http://www.eecs.berkeley.edu/~rcs/research/interactive_latency.html

putting h.a.c.s. (or other) sensor data into a motion-based webcam image

I am using some Raspberry Pis to monitor the areas around the house – mainly because it's awesome to see how many animals roam the garden throughout the day. On the Pi I am using the current Debian image and motion to interface with a USB webcam.

Now I wanted to include sensor data in the webcam images – like the current temperature. The nice thing about h.a.c.s. is that it can deliver every sensor's data as nice, easy-to-use JSON. The only challenge is to get that number into motion.

First of all I need to put together the URL under which I can access the data of the right sensor. In this case it's the sensor called "Schuppen" – an outdoor sensor measuring the current temperature around the house.

[Screenshot: the h.a.c.s. sensor URL and its JSON response]
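For illustration, the request is a plain HTTP GET; the response below is only a sketch – the exact JSON shape is an assumption derived from the jsawk expression used further down (a data array holding a timestamp/value pair):

# request the last entry of the "Schuppen" temperature sensor
curl -s 'http://hacs/data/sensor?name=Schuppen&type=temperature&lastentry=true'
# hypothetical response (field names assumed): a data array of [timestamp, value] pairs
{"name": "Schuppen", "data": [["2012-12-16 00:35:00", 4.5]]}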

Now, there is an easy way to 'feed' data into a running motion instance: motion offers a control port and allows setting the text_left and text_right properties. A simple GET request lets us set the text to – in this example – "remote-controlled-text":

[Screenshot: setting text_left through the motion control port]
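In curl terms this boils down to the following one-liner – assuming the control port is listening on motion's default port 8080 and the camera is thread 0:

# set the left-hand text overlay of camera thread 0
curl 'http://localhost:8080/0/config/set?text_left=remote-controlled-text'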

So – that's how the text is set. Now how do we get the temperature value, and just that, out of the JSON response of h.a.c.s.? Easy – use jsawk!

[Screenshot: extracting the value with jsawk]
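jsawk is to JSON what awk is to lines of text: it takes JSON on stdin and runs a little JavaScript over it. Picking the value – and only the value – out of the response sketched above looks like this:

# returns just the temperature, e.g. 4.5
curl -s 'http://hacs/data/sensor?name=Schuppen&type=temperature&lastentry=true' | jsawk 'return this.data[0][1]'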

With all that, a very small shell script is quickly hacked. Here's the code, ready to copy into your editor:

#!/bin/bash
# fetch the latest temperature reading of the "Schuppen" sensor from h.a.c.s.
TEMPERATURE=$(curl -s 'http://hacs/data/sensor?name=Schuppen&type=temperature&lastentry=true' | jsawk 'return this.data[0][1]')
# hand the value to the running motion instance via its control port
curl -s "http://localhost:8080/0/config/set?text_left=${TEMPERATURE}"

Port 8080 on localhost is the address of the motion control server.

To have the webcam image updated regularly, I added the script to crontab, and from now on the current temperature is in every webcam image – hurray!
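For completeness, the crontab entry might look like this – the script path is of course hypothetical, and once a minute is just a sensible guess for a webcam overlay:

# update the motion text overlay every minute (path is an example)
* * * * * /home/pi/set-webcam-temperature.sh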

Source 1: motion
Source 2: jsawk

Build a Brain – SPAUN

SPAUN, or Semantic Pointer Architecture Unified Network, is a promising next step in the pursuit of simulating a human brain. Building upon the Nengo Neural Simulator, scientists at the University of Waterloo in Ontario were able to report their first breakthrough results.

In 2013 there will be a book from Oxford University Press called 'How to Build a Brain' which will describe in depth what made these astonishing results possible.

But what are the results?

Well, that looks like number recognition. In fact, that's exactly what it is. SPAUN – that's how the scientists refer to their Frankenstein brain – is currently capable of solving 8 different tasks, number recognition being one of them. There are videos of all 8 tasks being performed.

The semantic pointers are named after the pointers common in computer science:

“Higher-level cognitive functions in biological systems are made possible by semantic pointers. Semantic pointers are neural representations that carry partial semantic content and are composable into the representational structures necessary to support complex cognition.

The term ‘semantic pointer’ was chosen because the representations in the architecture are like ‘pointers’ in computer science (insofar as they can be ‘dereferenced’ to access large amounts of information which they do not directly carry). However, they are ‘semantic’ (unlike pointers in computer science) because these representations capture relations in a semantic vector space in virtue of their distances to one another, as typically envisaged by connectionists.”

Source 1: http://nengo.ca/build-a-brain
Source 2: http://nengo.ca/build-a-brain/spaunvideos/

 

being there, without being there: Good Night Lamp

Isn’t technology great when it brings families closer together, even when they are a thousand miles apart?

Home automation does not only mean flipping switches and deploying sensors in every imaginable way. It also means creativity. Being creative with the functionality at hand is really what makes home automation so interesting.

It’s those creative uses that add purpose to the nerdy home automation switches and sensors. It’s what adds practicality.

Good Night Lamp is such a creative solution, making use of home automation hardware and the internet. To understand the concept, watch the video:

“The Good Night Lamp is a family of connected lamps that lets you communicate the act of coming back home to your loved ones, remotely.”

Well, I don’t know if it really needs specialized hardware like the Good Night Lamp products. If you have some sensors and the ability to flip switches, it is fairly easy to come up with workflows and things that should happen when the circumstances are right. In fact, I do not believe in highly specialized products like a single-purpose lamp. But I do believe that if those lamps are connected to a network and can be accessed through some sort of API, these types of products will pave the way to a connected world we so far only know from science fiction.

Another good approach to this is the long-promised IP-capable light bulb. Engineers have been using the "light bulb with an IP address" as an example for IPv6 for years. And it seems the time has come when we really want to assign an IP address to every light bulb in our home.

LIFX is a good first concept, and in a couple of months there will be more manufacturers offering networked light bulb solutions.

 

Source 1: http://goodnightlamp.com/
Source 2: http://lifx.co/

ELV MAX! Cube C# Library – control your cube!

I was asked whether it would be possible to get the ELV MAX! Cube interfacing functionality outside of h.a.c.s. – maybe as a library. Sure, that is possible! And to speed things up, I give you the ELV MAX! Cube C# library, called MAXSharp.

It’s a plain and simple library without many dependencies – in fact, there’s only some threading and the FastSerializer. Since I am using this library with h.a.c.s. as well, I did not remove the serializer implementation.

There’s a small demo program included, called MAXSharpExample. The library itself contains the abstractions necessary to get information from the ELV MAX! Cube. It does not contain functionality to control the cube – if you want to add that, feel free: it’s all open source and I would love to see pull requests!

The architecture is based upon polling – I know events would make for a cleaner design, but for various reasons I am using queues in h.a.c.s., and therefore MAXSharp does as well. The example application spins up the ELV MAX! interfacing/handling thread, and as soon as you’re connected you can access all house-related information and get diff events from the cube.

Any comment is appreciated!

Source 1: State of Reverse Engineering
Source 2: https://github.com/bietiekay/MAXSharp

if this then that – simple recipes for home automation

Workflows are important – and with a lot of switching possibilities and even more sensors measuring things, it becomes important to be able to implement workflows behind all that hardware.

It’s nice to be able to switch a light on and off whenever you want to. But isn’t it even better to have some sort of workflow behind all sorts of triggers? Think of the possibilities!

If this then that is a service to help you define very simple workflows:

Want an example? Think along the lines of: if the sun sets, then switch on the porch light.

It knows a lot of ‘this’ and a lot of ‘that’. So give it a try – or even better, add your own home automation software as ‘this’ and ‘that’ :-)

Source 1: https://ifttt.com

Blogroll: Nerdcore NC-Sources OPML

A couple of days ago the author of the well-known and much-read Nerdcore weblog created a page he calls NC-Sources, which lists all the sources he follows in his RSS reader. As you can imagine, this is pure gold for anyone who wants interesting links to all things nerd.

Unfortunately NC-Sources is just a web page listing each site’s name and RSS feed URL. You cannot import it into your RSS reader to use it for your own informational needs.

Here I am to the rescue. I took all the URLs from that NC-Sources page, which resulted in a file listing the page URL and the RSS feed URL on alternating lines. A short trip to the command line and the use of awk helped filter just the feed URLs into a new file, which was then fed into an OPML generator.
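A minimal sketch of that command line – the file names are made up, and it assumes the feed URL really does sit on every second (even) line of the scraped list:

# keep only every second line, i.e. the RSS feed URLs
awk 'NR % 2 == 0' nc-sources.txt > nc-feeds.txt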

So now you can download the OPML file to import it into your own RSS reader. Get it here.

Source 1: NC-Sources
Source 2: NC-Sources OPML File
Source 3: OPMLBuilder

eBook: How To Create A Mind

Those who know me well know that I am a strong believer in artificial intelligence. We’re not there yet – not even close; I don’t know if we (as in humanity) have even left the launchpad. But I strongly believe that it will be possible to simulate human thought – maybe not in the way AI is defined:

“The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of optimism, but has also suffered setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.” (Wikipedia)

More on that in another article in the future – I started working on the subject a while ago and keep coming across authors, mostly of science fiction books, who deal with the topic.

Now there is a new book by Ray Kurzweil. It’s called “How To Create A Mind” and deals with how human thoughts come to be and how the human mind seems to work.

“Now, in his much-anticipated How to Create a Mind, he takes this exploration to the next step: reverse-engineering the brain to understand precisely how it works, then applying that knowledge to create vastly intelligent machines.

Drawing on the most recent neuroscience research, his own research and inventions in artificial intelligence, and compelling thought experiments, he describes his new theory of how the neocortex (the thinking part of the brain) works: as a self-organizing hierarchical system of pattern recognizers. Kurzweil shows how these insights will enable us to greatly extend the powers of our own mind and provides a roadmap for the creation of superintelligence—humankind’s most exciting next venture. We are now at the dawn of an era of radical possibilities in which merging with our technology will enable us to effectively address the world’s grand challenges.”

Source 1: http://howtocreateamind.com/
Source 2: http://en.wikipedia.org/wiki/Artificial_intelligence

extending the house storage

In times when mobile phone cameras produce pictures of 2 MByte each and decent DSLR cameras produce pictures of more than 20 MByte each – not to speak of the various sensors around the house – the question of how all of this is going to be stored becomes an interesting one.

Prices for mass storage have been dropping for years and hard disks keep getting bigger and bigger. 3 TByte drives are fairly cheap now – cheap enough to consider serious redundancy even for home use.

With that home automation hobby and with very specific needs when it comes to home entertainment and even watching TV (we don’t watch live TV…), we have a relatively huge demand for storage space. We are already storing over 10 TByte of data – fully encrypted, redundant and backed up.

Our file server infrastructure grew with the needs over the years.

It started way back in 2003 when I set up the first fileserver for my apartment at the time. It was a fairly huge 19-inch case with 5 hard disks (100 GByte each). This machine was filled up in 2005 and needed replacement.

We were in IDE land back then. Because the system hardware died on me due to a power surge, all the disks and a new mainboard were seated in a new case with room for a lot of disks.

One interesting detail might be that I consistently used Windows Server for that purpose.

The machine was never just a fileserver. It served as SMTP, IMAP, NNTP and media server all along, which led to a growing demand for CPU and memory resources. It started with an 800 MHz AMD Athlon (which died quickly); for the next years to come I used a 2.8 GHz Intel Pentium 4. Everything started with Windows Server 2003 – bought in the Microsoft Store when I was a Microsoft employee.

Disk space demand kept growing, and in 2009 a new case, a new mainboard + memory and new disks were due.

Since 2009 a Core 2 Quad Q9550 at 2.8 GHz with 16 GByte of memory has been the heart of our fileserver. Since we frequently live-transcode video streams to feed iPads and iPhones around the house, that machine has plenty of grunt to meet the demand: we can have 2 iPhones and 2 iPads playing 720p content without stutters. Back in 2009 we also switched to a mixed IDE and SATA setup, as you can see in the picture:

There was plenty of room when the new case arrived – but it was getting crowded just 2 years later, in 2011. Every seat was taken, which means 13 disks in the case and 1 attached through USB.

That adds up to more than 16 TByte of raw storage. In 2011 we also upgraded to Windows Server 2008. We never lost a bit with that operating system – not under the heaviest load, and even through serious hardware malfunctions. A lot of those 13 disks died throughout the years: almost one every 2 months had to be replaced – most of them through extended warranties – and of course we always have a spare ready to take its place. Only once did I have to rush to a store to get a replacement drive, when two disks failed shortly after each other. That’s why there’s that 2 TByte drive in the 1.5 TByte compound…

So it’s getting full again. Since that case can’t really hold more disks, and replacing them is getting harder because of the tight fit, the idea was born to not get an even bigger case but to add a NAS/SAN that holds 6 to 8 disks, comes with its own redundancy management and exports one big iSCSI volume.

That said, a network card was added to the fileserver and a QNAP TS-859 Pro+ 8-bay appliance was bought. It’s a shiny black device which uses less power than an additional case with extra CPU and memory would have used – and after calculating through a number of combinations it’s even the cheapest solution for an 8-drive set-up.

After some intensive testing it seems that the iSCSI approach is the most robust one. Since I am just done testing the appliance, the next step is to buy drives. So stay tuned!

Source 1: http://www.qnap.com/de/index.php?lang=de&sn=375&c=292&sc=528&t=532&n=3486

What happened to: realtime Radiosity lighting

Back in 2006 I wrote about a new technology which the (back then equally new) company Geomerics was demoing.

Back then everything was just a demo. Now it seems that Geomerics has found some very well-known customers, and – largely unnoticed – a lot of the graphical beauty of current-generation games comes from the capabilities that real-time radiosity lighting adds.

“Geomerics delivers cutting-edge graphics technology to customers in the games and entertainment industries. Geomerics’ Enlighten technology is behind the lighting in best-selling titles including Battlefield 3, Need for Speed: The Run, Eve Online and Quantum Conundrum. Enlighten has been licensed by many of the top developers in the industry, including EA DICE, EA Bioware, THQ, Take 2 and Square Enix.” (Source)

There is even an updated version of the demo video:

Source 1: real time radiosity lighting article from 2006
Source 2: Geomerics Presentations
Source 3: More Geomerics Media

Realtime Video Effects: Time Remap

With today’s processing power and the faults of current-generation digital video cameras you can have a lot of fun – if you know how:

The effect demonstrated above is called time remapping. The description of the video tells us more about the effect itself:

The effect was discovered accidentally by a photographer called Jacques Henri Lartigue at the beginning of the 20th century (in 1912, to be precise). He took a picture of a race car with elliptically deformed tires – an effect caused by the characteristics of the camera he was using: the travelling slit of its focal-plane shutter.

Source 1: http://vimeo.com/7878518
Source 2: http://en.wikipedia.org/wiki/Jacques_Henri_Lartigue
Source 3: http://bokeh.fr/blog/photographes/la-voiture-deformee-de-jacques-henri-lartigue/