there’s a hackerspace in Bamberg

The other day I found out that there is an actual hackerspace in Bamberg – the city where I work and near which I live. For some strange reason it never occurred to me to search for a hackerspace nearby. But now that the 29C3 is at the gates, I found them on the “Congress Everywhere” pages (beware, that site is having a hard time right now).

[Banner image: cc-by richterprodukt]

Since I only just found it, and Christmas duties take their toll, I wasn’t able to drop by and talk to the people there in person – I’ve just contacted them over their IRC channel (#backspace on freenode). Eventually I will have time to visit them, and I’ll have a report up here then.

For the time being, enjoy their website and the projects they have already done. Apparently there are some very interesting LED lighting experiments.

Source: http://www.hackerspace-bamberg.de/

know your numbers!

Wikipedia describes latency this way:

“Latency is a measure of time delay experienced in a system, the precise definition of which depends on the system and the time being measured. In communications, the lower limit of latency is determined by the medium being used for communications. In reliable two-way communication systems, latency limits the maximum rate that information can be transmitted, as there is often a limit on the amount of information that is “in-flight” at any one moment. In the field of human-machine interaction, perceptible latency has a strong effect on user satisfaction and usability.” (Wikipedia)

Given that, it’s quite important for any developer to know their numbers. Latency has a huge impact on how software should be architected, so it’s important to keep the following figures in mind:

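The interactive visualization at the source below shows how these figures evolve over the years. For 2012 hardware, the commonly cited rough numbers (after Jeff Dean’s well-known list – orders of magnitude rather than exact measurements, so treat them as approximations) are:

L1 cache reference ......................... 0.5 ns
Branch mispredict .......................... 5 ns
L2 cache reference ......................... 7 ns
Mutex lock/unlock .......................... 25 ns
Main memory reference ...................... 100 ns
Compress 1 KB with Zippy ................... 3 µs
Send 1 KB over a 1 Gbps network ............ 10 µs
Read 4 KB randomly from SSD ................ 150 µs
Read 1 MB sequentially from memory ......... 250 µs
Round trip within the same datacenter ...... 500 µs
Read 1 MB sequentially from SSD ............ 1 ms
Disk seek .................................. 10 ms
Read 1 MB sequentially from disk ........... 20 ms
Send a packet CA → Netherlands → CA ........ 150 ms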

Source: http://www.eecs.berkeley.edu/~rcs/research/interactive_latency.html

putting h.a.c.s. (or other) sensor data into a motion-based webcam image

I am using some Raspberry Pis to monitor the areas around the house – mainly because it’s awesome to see how many animals roam around the garden throughout the day. On the Pi I am using the current Debian image and motion to interface with a USB webcam.

Now I wanted to include sensor data in the webcam images – like the current temperature. The nice thing about h.a.c.s. is that it can deliver every sensor’s data as nice and easy-to-use JSON. The only challenge is to get that number into motion.

First of all I need to put together the URL under which I can access the data of the right sensor. In this case it’s the sensor called “Schuppen” – an outdoor sensor measuring the current temperature around the house:

http://hacs/data/sensor?name=Schuppen&type=temperature&lastentry=true
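Querying that URL with curl returns the latest reading as JSON. A minimal sketch of what that looks like – the exact response shape is an assumption here, inferred from the jsawk expression used further down (a data field holding [timestamp, value] pairs):

curl -s 'http://hacs/data/sensor?name=Schuppen&type=temperature&lastentry=true'
# hypothetical response:
# {"name":"Schuppen","type":"temperature","data":[[1355612400,3.4]]}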

Now there is an easy way to ‘feed’ data into a running motion instance. Motion offers a control port and allows you to set the text_left and text_right properties. A simple GET request against that port lets us set the text to – in this example – “remote-controlled-text”:

curl 'http://localhost:8080/0/config/set?text_left=remote-controlled-text'

So – that’s how the text is set. But how do we get the temperature value, and just that, out of the JSON response of h.a.c.s.? Easy – use jsawk!

curl -s 'http://hacs/data/sensor?name=Schuppen&type=temperature&lastentry=true' | jsawk 'return this.data[0][1]'
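jsawk runs the given JavaScript expression against the parsed JSON; assuming the response shape sketched above, return this.data[0][1] picks the value part of the first (and, with lastentry=true, only) [timestamp, value] pair.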

With all that, a very small shell script is quickly hacked together. If you want to copy it into your editor, here’s the code:

#!/bin/bash
# fetch the latest temperature reading for the "Schuppen" sensor from h.a.c.s.
TEMPERATURE=$(curl -s 'http://hacs/data/sensor?name=Schuppen&type=temperature&lastentry=true' | jsawk 'return this.data[0][1]')
# push the value into the text_left overlay of the running motion instance
curl -s "http://localhost:8080/0/config/set?text_left=${TEMPERATURE}"

Port 8080 on localhost is the address of the motion control server.

To have the overlay updated regularly, I added the script to crontab, and from now on the current temperature is in every webcam image – hurray!
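A minimal crontab entry for that could look like this (the script path is hypothetical and the one-minute interval is just an example):

# update the motion text overlay with the current temperature every minute
* * * * * /home/pi/update-webcam-temperature.sh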

Source 1: motion
Source 2: jsawk

Build a Brain – SPAUN

SPAUN, or Semantic Pointer Architecture Unified Network, is a promising next step in the pursuit of simulating a human brain. Building upon the Nengo Neural Simulator, scientists at the University of Waterloo in Ontario were able to report their first breakthrough results.

In 2013 there will be a book from Oxford University Press called ‘How to Build a Brain’ which will describe in depth what made these astonishing results possible.

But what are the results?

Well, that looks like number recognition – and in fact that’s exactly what it is. SPAUN – that’s how the scientists refer to their frankenstein-brain – is now capable of solving 8 different tasks, one of them being number recognition. There are videos of all 8 tasks being performed.

The semantic pointers are named after the pointers common in computer science:

“Higher-level cognitive functions in biological systems are made possible by semantic pointers. Semantic pointers are neural representations that carry partial semantic content and are composable into the representational structures necessary to support complex cognition.

The term ‘semantic pointer’ was chosen because the representations in the architecture are like ‘pointers’ in computer science (insofar as they can be ‘dereferenced’ to access large amounts of information which they do not directly carry). However, they are ‘semantic’ (unlike pointers in computer science) because these representations capture relations in a semantic vector space in virtue of their distances to one another, as typically envisaged by connectionists.”

Source 1: http://nengo.ca/build-a-brain
Source 2: http://nengo.ca/build-a-brain/spaunvideos/

 

being there, without being there: Good Night Lamp

Isn’t technology great when it brings families closer together, even when they are thousands of miles apart?

Home automation does not only mean flipping switches and reading sensors in every imaginable way. It also means creativity. And being creative with the functionality at hand is really what makes home automation so interesting.

It’s those creative uses that add value to the nerdy home automation switches and sensors. It’s what adds practicality.

Good Night Lamp is such a creative solution that makes use of home automation hardware and the internet. To understand the concept, watch the video on their website; this is how they describe it:

“The Good Night Lamp is a family of connected lamps that lets you communicate the act of coming back home to your loved ones, remotely.”

Well, I don’t know if it really needs specialized hardware like the Good Night Lamp products. Certainly, if you have some sensors and the ability to flip switches, it is fairly easy to come up with workflows and things that should happen when the circumstances are right. In fact I do not believe in highly specialized products like a single-purpose lamp. But I do believe that if those lamps are connected to a network and can be accessed through some sort of API, these types of products will pave the way to a connected world we so far only know from science fiction.

Another good approach to this is the long-promised IP-capable light bulb. Engineers have been using the “light bulb with an IP address” as an example for IPv6 for years now. And it seems the time has come when we really want to assign an IP address to every light bulb in our home.

LIFX is a good first concept, and in a couple of months there will be more manufacturers offering networked light bulb solutions.

 

Source 1: http://goodnightlamp.com/
Source 2: http://lifx.co/

ELV MAX! Cube C# Library – control your cube!

I was asked if it would be possible to get the ELV MAX! Cube interfacing functionality outside of h.a.c.s. – maybe as a library. Sure! That is possible. And to speed things up, I give you the ELV MAX! Cube C# library, called MAXSharp.

It’s a plain and simple library without many dependencies – in fact, there’s only some threading and the FastSerializer. Since I am using this library with h.a.c.s. as well, I did not remove the serializer implementation.

There’s a small demo program included, called MAXSharpExample. The library itself contains the abstractions necessary to get information from the ELV MAX! Cube. It does not contain functionality to control the cube – if you want to add that, feel free: it’s all open source and I would love to see pull requests!

The architecture is based upon polling – I know events would make for a cleaner design, but for various reasons I am using queues in h.a.c.s., and therefore MAXSharp does as well. The example application spins up the ELV MAX interfacing/handling thread, and as soon as you’re connected you can access all house-related information and get diff events from the cube.

Any comment is appreciated!

Source 1: State of Reverse Engineering
Source 2: https://github.com/bietiekay/MAXSharp

if this then that – simple recipes for home automation

Workflows are important – and with a lot of switching possibilities and even more sensors that measure things, it becomes important to be able to implement workflows behind all that hardware.

It’s nice to be able to switch a light on and off whenever you want to. But isn’t it even better to have some sort of workflow behind all sorts of triggers? Think of the possibilities!

If This Then That is a service that helps you define very simple workflows.

Want an example? A typical recipe reads something like: if the weather forecast says rain tomorrow, then send me a text message.

It knows a lot of ‘this’ and a lot of ‘that’. So give it a try – or even better, add your own home automation software as ‘this’ and ‘that’ :-)

Source 1: https://ifttt.com

Music to listen to: Philter – The Blossom Chronicles

I am a total non-soundtrack guy. So far there has been no purely instrumental score that I liked. But there are a couple of instrumental albums by different artists for which I wish there was a movie.

This is one of them. Or make that two – because the predecessor of “The Blossom Chronicles” is equally great; that one is called “The Beautiful Lies”.

With sprinkles of beautiful voices, surrounded by layers of beautiful sound and beepy 8-bit noises here and there, it’s a wonderful journey into melodies and sounds. Get the two albums that are available and enjoy the flight!

Here are some free examples from the Philter homepage – go there and find more:

Tokyo At Night

I Am Nobody

Source 1: http://thephilterlounge.com/
Source 2: http://thephilterlounge.com/the-blossom-chronicles-is-out-everywhere/
Source 3: http://thephilterlounge.com/music/

Blogroll: Nerdcore NC-Sources OPML

A couple of days ago, the author of the well-known and much-read Nerdcore weblog created a page he calls NC-Sources, which lists all the sources he follows in his RSS reader. As you can imagine, this is pure gold for anyone who wants interesting links to all kinds of nerd pages.

Unfortunately, NC-Sources is just available as a web page that lists the name and the RSS feed URL of each source. You cannot import it into your RSS reader to use it for your own informational needs.

Here I am to the rescue. I’ve taken all the URLs from that NC-Sources page. That resulted in a file that lists the page URL and the RSS feed URL on alternating lines. A short trip to the command line and the use of awk helped filter just the RSS feed URLs into a new file, which was then fed into an OPML generator.
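In case you want to reproduce that filtering step, here is a minimal sketch – assuming the list was saved as nc-sources.txt with the feed URL on every second line (the file name and line order are assumptions):

# keep only every second line – the RSS feed URLs
awk 'NR % 2 == 0' nc-sources.txt > feed-urls.txt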

So now you can download the OPML file to import it into your own RSS reader. Get it here.

Source 1: NC-Sources
Source 2: NC-Sources OPML File
Source 3: OPMLBuilder

eBook: How To Create A Mind

Those who know me well know that I am a strong believer in artificial intelligence. We’re not there yet. Not even close – I don’t know if we (as in humanity) have even left the launchpad. But I strongly believe that it will be possible to simulate human thought – maybe not in the way AI is defined:

“The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of optimism, but has also suffered setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.” (Wikipedia)

More on that in another article in the future – I started working on the subject a while ago and keep coming across a lot of authors and, mostly, science fiction books that deal with the topic.

Now there is a new book by Ray Kurzweil. It’s called “How To Create A Mind” and deals with the topic of how human thoughts come to be and how the human mind seems to work.

“Now, in his much-anticipated How to Create a Mind, he takes this exploration to the next step:  reverse-engineering the brain to understand precisely how it works, then applying that knowledge to create vastly intelligent machines.

Drawing on the most recent neuroscience research, his own research and inventions in artificial intelligence, and compelling thought experiments, he describes his new theory of how the neocortex (the thinking part of the brain) works: as a self-organizing hierarchical system of pattern recognizers. Kurzweil shows how these insights will enable us to greatly extend the powers of our own mind and provides a roadmap for the creation of superintelligence—humankind’s most exciting next venture. We are now at the dawn of an era of radical possibilities in which merging with our technology will enable us to effectively address the world’s grand challenges.”

Source 1: http://howtocreateamind.com/
Source 2: http://en.wikipedia.org/wiki/Artificial_intelligence

Raspberry Pi gets a camera

The first signs of the upcoming camera board for the Raspberry Pi are showing. During the Electronica 2012 fair, RS showed the board to the public for the first time.

Given that it’s going to be a 25 Euro add-on for the Pi, the specification is quite impressive. The OmniVision OV5647 is used as the image sensor – its bigger brother is used in the iPhone 4. OmniVision says:

“The OV5647 is OmniVision’s first 5-megapixel CMOS image sensor built on proprietary 1.4-micron OmniBSI™ backside illumination pixel architecture. OmniBSI enables the OV5647 to deliver 5-megapixel photography and high frame rate 720p/60 high-definition (HD) video capture in an industry standard camera module size of 8.5 x 8.5 x ≤5 mm, making it an ideal solution for the main stream mobile phone market.

The superior pixel performance of the OV5647 enables 720p and 1080p HD video at 30 fps with complete user control over formatting and output data transfer. Additionally, the 720p/60 HD video is captured in full field of view (FOV) with 2 x 2 binning to double the sensitivity and improve SNR. The post binning re-sampling filter helps minimize spatial and aliasing artifacts to provide superior image quality.

OmniBSI technology offers significant performance benefits over front-side illumination technology, such as increased sensitivity per unit area, improved quantum efficiency, reduced crosstalk and photo response non-uniformity, which all contribute to significant improvements in image quality and color reproduction. Additionally, OmniVision CMOS image sensors use proprietary sensor technology to improve image quality by reducing or eliminating common lighting/electrical sources of image contamination, such as fixed pattern noise and smearing to produce a clean, fully stable color image.

The low power OV5647 supports a digital video parallel port or high-speed two-lane MIPI interface, and provides full frame, windowed or binned 10-bit images in RAW RGB format. It offers all required automatic image control functions, including automatic exposure control, automatic white balance, automatic band filter, automatic 50/60 Hz luminance detection, and automatic black level calibration.”

That sensor delivers RAW RGB imagery to the Raspberry Pi through the onboard camera connector interface:

(the board shown is actually a 14-megapixel test board, not the final 5-megapixel one…)

And the part that impressed me the most: the 5-megapixel sensor delivers its raw data stream and it gets H.264-compressed directly on the GPU of the Raspberry Pi. 30 frames per second at 1080p without noticeable CPU load – how does that sound? Not bad for a 50 Euro setup!

Source 1: First Demo
Source 2: OmniVision OV5647 Color CMOS QSXGA Image Sensor