“make your home smarter”, use case #7 – hear that doorbell ringing!

We love music. We love it playing loud across the house. And when we did that in the past, we missed things happening around us.

Like that delivery guy ringing the front doorbell and us missing an important delivery.

This happened a lot. UNTIL we retrofitted a little PCB to our doorbell circuit to make the house aware of ringing doorbells.

Now every time the doorbell rings, a couple of things can take place (a small sketch follows the list):

– push notifications to all devices, screens and watches – that gets your attention even in the middle of a workout
– pause all audio and video playback in the house
– take a camera shot of who is in front of the door pushing the doorbell
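Wiring up a new reaction is then just another subscriber on the broker. Here's a minimal Python sketch of how such a fan-out could look with paho-mqtt – broker hostname and topic names are made up for illustration, our actual topics differ:

import paho.mqtt.client as mqtt

BROKER = "mqtt.house.local"   # hypothetical in-house broker

def on_doorbell(client, userdata, message):
    # 1. notification topic that phones, screens and watches subscribe to
    client.publish("house/notify/all", "Doorbell is ringing!")
    # 2. ask every media player in the house to pause
    client.publish("house/media/pause", "true")
    # 3. trigger a camera snapshot of whoever is at the front door
    client.publish("house/camera/frontdoor/snapshot", "now")

client = mqtt.Client()
client.connect(BROKER)
client.subscribe("house/doorbell/ring")
client.message_callback_add("house/doorbell/ring", on_doorbell)
client.loop_forever()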

And: it’s easy to wire up new things, whatever those may be in the future.

“make your home smarter”, use case #5 – the submarine light (it’s red!)

We all know it: After a long day of work you chilled out on your bean bag and fell asleep early. You gotta get up and into your bed upstairs. So usually the light goes on, you go upstairs, into bed. And there you have it: You’re not sleepy anymore.

Partially this is caused by the light you turned on. If that light is bright enough and has the right color it will wake you up no matter what.

To fight this, companies like Apple introduced things like “Night Shift” on iPhones, iPads and Macs.

“Night Shift uses your computer’s clock and geolocation to determine when it’s sunset in your location. It then automatically shifts the colors in your display to the warmer end of the spectrum.”

Simple, eh? Now why does your house not do the same to prevent you from being ripped out of your sleepy state while tiptoeing upstairs?

Right! This is where the smart house will be smart.

Nowadays we’ve got all those funky LED bulbs that can be dimmed and even have their colours set. Why none of the market offerings come with this simple feature is beyond me:

After sunset, when turned on, default to something warmer and not so bright in general.

I implemented it, and it’s appropriately called the “U-Boot light”. Whenever we roam around the upper floor at night, the light that follows our steps (it’s smart enough to do that) will not go full blast but light up dim with a reddish color to prevent wake-up calls.

The smart part is that it takes into account (see the sketch after this list):

– movement in the house
– sunset and dawn depending on the current geographic location of the house (more on that later, no it does not fly! (yet))
– the path you’re walking, turning the light on and off along the way using the various sensors around the house anyway
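Boiled down, the decision logic is tiny. A sketch – sunrise/sunset would come from a solar calculation for the house’s coordinates, and the brightness/color values here are illustrative, not the ones we actually use:

from datetime import datetime, time

def light_command(now: datetime, sunrise: time, sunset: time) -> dict:
    """Brightness/color for a motion-triggered light: dim and reddish
    between sunset and sunrise ("U-Boot mode"), neutral white otherwise."""
    if now.time() >= sunset or now.time() <= sunrise:
        return {"brightness": 10, "rgb": (255, 40, 0)}    # submarine light
    return {"brightness": 100, "rgb": (255, 255, 255)}    # daytime default

# 23:40 at night -> dim red light instead of a wake-up call
print(light_command(datetime(2016, 1, 5, 23, 40), time(8, 12), time(16, 31)))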

smart home use case #4 – being location aware is important

Now that you got your home entertainment reacting to you making a phone call (use case #1) as well as your current position in the played audiobook (use case #3) you might want to add some more location awareness to your house.

If your house is smart enough to know where you are, outside, inside, in what room, etc. – it might as well react on the spot.

So when you leave or enter the house (a small dispatch sketch follows this list):

– pause any music playing and resume it when you come back
– shut down unnecessary equipment to limit power consumption when not in use, and start it back up to its previous state (TVs, media centers, lights, heating) when you’re back
– arm the cameras and motion sensors 
– run bandwidth-intensive tasks (like backing up machines or running updates) while nobody inside the house is using the resources
– let the Roomba do its thing
– switch the communication coming from the house into a different state, since notifications, list management, spoken commands and so on work differently when nobody is home
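The dispatch itself can stay embarrassingly simple – a sketch with made-up action names:

def on_presence_change(people_home: int) -> list:
    """Map a presence transition to house actions (names are illustrative)."""
    if people_home == 0:
        return ["media.pause", "standby.unused_equipment", "cameras.arm",
                "backup.start", "roomba.clean", "house_comms.mode_away"]
    return ["media.resume", "power.restore_previous_state", "cameras.disarm",
            "backup.defer", "roomba.dock", "house_comms.mode_home"]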

There are a lot of things that benefit from location awareness.

Bonus points for awareness outside the house and representing it like a “Weasley clock”: “xxx is currently at work”.

Bonus points combo breaker for using an open-source service like Miataru (http://miataru.com/#tabr3) for location tracking outside the house.
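However the coordinates come in (Miataru or anything else), the “clock hand” is just a geofence lookup – a sketch with made-up places:

from math import asin, cos, radians, sin, sqrt

PLACES = {"home": (51.0, 7.0, 150), "work": (51.2, 6.8, 300)}  # lat, lon, radius (m)

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371000 * 2 * asin(sqrt(a))

def clock_hand(lat, lon):
    for name, (plat, plon, radius) in PLACES.items():
        if haversine_m(lat, lon, plat, plon) <= radius:
            return name
    return "travelling"

print(clock_hand(51.0001, 7.0001))   # -> "home"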

carbon neutral house – when the sun is shining.

7-day and 30-day graphs for solar power generation, power consumption, oil burned to heat water, and outside temperatures to go along with them.

Having everything in a time-series database makes such things a real blast: wandering around in all the telemetry data. There are almost 300 topics to pick from and combine.

Yes, generally the solar array produces more than the whole household consumes. Except that one 26th.

I’m thinking about building a display showing when we are getting close to consuming what has been produced in terms of electricity – something like a traffic light turning more red as we approach using up the electricity that was generated carbon-neutrally.
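The logic behind such a traffic light fits in a few lines – the thresholds here are picked arbitrarily:

def traffic_light(produced_kwh: float, consumed_kwh: float) -> str:
    """Green while well below today's solar production, red when it's used up."""
    if produced_kwh <= 0:
        return "red"                      # nothing generated (yet) today
    ratio = consumed_kwh / produced_kwh
    if ratio < 0.75:
        return "green"
    if ratio < 1.0:
        return "amber"
    return "red"

print(traffic_light(24.0, 19.5))   # -> "amber"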

How to weigh your cat! – the IoT version

This is Leela. She is a 7-year-old lilac-white British Shorthair cat that lives with us. Leela had a sister who used to live with us as well, but she developed a heart condition and passed away last year. Witnessing how quickly such conditions develop and evolve, we thought we could do something to monitor Leela’s health a bit, just to have some sort of pre-alert if something changes.

Kid in a Candy Store

As the Internet of Things is becoming a real thing these days, I felt like a kid in a candy store when I discovered that there are a couple of really, really cheap options to get a small PCB with input/output connectors onto my house’s WiFi network.

One of the main actors of this story is the so-called ESP8266: a very small and affordable system-on-a-chip that can run small portions of code and connect itself to a wireless network. Even better, it comes with several inputs that can be used to do all sorts of wonderful things.

And so it happened that we needed to know the weight of our cat. She seemed to have gotten a bit chubby over time, and having a reference weight would help to get her back in shape. If you ever tried to weigh a cat, you know that it’s much easier said than done.

The alternative was quickly brought up: build a WiFi-connected scale to weigh her litter box every time she uses it. And since I had recently bought an ESP8266 evaluation board, I just had to figure out how to build a scale. Looking around the house I found a broken human scale (electronics fried). Maybe it could be salvaged as a parts donor?

A day later I had done all the reading and learned that there is a thing called a “load cell”. Load cells can be bought in different shapes and sizes, and – when connected to a small ADC – they deliver, well, a weight value.

I cracked the human scale open and tried to see what was broken. It luckily turned out to have completely fried electronics, but the load cells were good to go.

Look at this load cell:

Hardware

That brought the parts list of this project down to:

  • an ESP8266 – an Adafruit Huzzah in my case
  • an HX711 ADC board to amplify and prepare the signal from the load cells
  • a human scale with just enough space in the original case to fit the new electronics into and connect everything.

The HX711 board was the only thing I had to order hardware-wise. It was delivered the next day, and then it was a matter of soldering things together and throwing in a small Arduino IDE sketch.

My soldering and wiring skills are really sub-par. But it worked from the get-go. I was able to set up a small Arduino sketch and get measurements from the load cells that seemed reasonable.

Now the hardware was all done – almost too easy. The software would be the important part. In order to create something flexible I needed to make an important decision: how would the scale tell the world about its findings?

Software

Two basic options: PULL or PUSH?

Pull would mean that the ESP8266 offers a web service, or at least a web server, that exposes the measurements in one way or another. It would mean that a client needs to poll for a new number at regular intervals.

Push would mean that the ESP8266 connects to a server somewhere and, whenever a meaningful measurement is taken, sends it out to that server. With this option there would be another decision to make: which technology to use to push the data out.

Now a bit of history: at that time I was just about to re-implement the whole home automation system of the house, which I had been using for the last 6 years, with more modern/interoperable technologies. For that project I had made the decision to have all events (actors and sensors) as well as some additional information channeled into MQTT topics.

Let’s refer to Wikipedia on this:

“MQTT (formerly MQ Telemetry Transport) is an ISO standard (ISO/IEC PRF 20922) publish-subscribe-based “lightweight” messaging protocol for use on top of the TCP/IP protocol. It is designed for connections with remote locations where a “small code footprint” is required or the network bandwidth is limited. The publish-subscribe messaging pattern requires a message broker. The broker is responsible for distributing messages to interested clients based on the topic of a message. Andy Stanford-Clark and Arlen Nipper of Cirrus Link Solutions authored the first version of the protocol in 1999.”

Something built for oil pipelines can’t be wrong for your house – can it?

So MQTT uses the notion of a “topic” to sub-address different entities within its network. Think of a topic as a simple address like “house/litterbox/weight”. And on such a topic, MQTT lets you set a value as well.
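Setting a value on a topic is a one-liner from pretty much any language; for example with the Python paho-mqtt client (broker hostname made up):

import paho.mqtt.publish as publish

# every subscriber of "house/litterbox/weight" receives the value immediately
publish.single("house/litterbox/weight", payload="4215", hostname="mqtt.house.local")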

The alternative to MQTT would have been something like WebSockets to push events out to clients. For the home automation the decision went towards MQTT, and so far it seems to have been the right call. More and more products and projects are focusing on MQTT as their main message transport as well.

For the home automation I had already set up a demo MQTT broker in the house – so naturally the first call for the litterbox project was to utilize that.

The folks at Adafruit provide an MQTT library for their hardware, and within minutes the scale started to send its measurements into the “house/litterbox/weight” topic of the house MQTT broker.

Some tweaking and hacking later, the scale was put together and the actual litterbox set on top.

Since Adafruit also offers a platform to send MQTT messages to and create neat little dashboards with, I have set up a little demo dashboard that shows a selection of the data being pushed from the house MQTT broker to the Adafruit.io MQTT broker.

These are the raw values which are sent into the weight topic:

You can access it here: https://io.adafruit.com/bietiekay/stappenbach

So the implementation done and used now is very simple. On start-up the ESP8266 initialises and resets the weight to 0. It then takes frequent weight measurements at the rate configured in the source code. Those measurements are monitored for certain criteria: if there is a sudden increase, it is assumed that “the cat entered the litterbox”. The weight is then monitored and averaged over time. When there is a sudden drop of weight below a threshold, that last “high” measurement is taken as the actual cat weight and sent out to a /weight topic on MQTT. The regular measurements are sent separately to another configurable MQTT topic.
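Stripped of all hardware specifics, the detection logic described above boils down to something like the following Python sketch (the thresholds here are illustrative – the real ones live in the Arduino source linked below):

ENTER_THRESHOLD = 2000   # sudden increase in grams = "cat entered"
LEAVE_THRESHOLD = 1000   # falling back below this = "cat left"

cat_inside = False
readings = []

def on_measurement(grams, publish):
    """Feed every raw reading in; publish(topic, value) sends it out via MQTT."""
    global cat_inside, readings
    publish("house/litterbox/raw", grams)            # the regular measurement topic
    if not cat_inside and grams > ENTER_THRESHOLD:
        cat_inside, readings = True, [grams]         # sudden increase: cat entered
    elif cat_inside and grams < LEAVE_THRESHOLD:
        cat_inside = False                           # sudden drop: cat left
        publish("house/litterbox/weight", sum(readings) / len(readings))
    elif cat_inside:
        readings.append(grams)                       # average while she is in there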

You can grab the very ugly source code of the Arduino sketch here: litterbox_sourcecode

And of course, with a bit of logic, this would be the calculated weight topic:

Of course it is not enough to just send data into MQTT topics and be done with it. You want things like logging and data storage. Eventually we also wanted some sort of notification when states change or a measurement is taken.

MQTT, the cloud and self-hosted

Since MQTT enables a lot of scenarios for implementing such actions, I am going to touch on just the two we are using for our house.

  1. We wanted to get a push notification to our phones whenever a weight measurement was taken – essentially whenever the cat has done something in the litterbox. The easiest solution: set up a recipe on If This Then That (IFTTT) and use Pushover to send push notifications to whatever device we want (a self-hosted alternative is sketched after this list).
  2. To log and monitor everything in some sort of dashboard, the easiest solution seemed to be Adafruit’s offering. For hosting inside our house, a combination of InfluxDB to store the data, Telegraf to gather it and insert it into InfluxDB, and Chronograf to render nice graphs was the best choice.
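For completeness, the notification path does not strictly need IFTTT: a small self-hosted bridge could talk to Pushover’s HTTP API directly. A sketch (tokens and broker hostname are placeholders):

import requests
import paho.mqtt.client as mqtt

def on_weight(client, userdata, message):
    # forward every published cat weight as a push notification
    requests.post("https://api.pushover.net/1/messages.json", data={
        "token": "APP_TOKEN",    # placeholder: your Pushover application token
        "user": "USER_KEY",      # placeholder: your Pushover user key
        "message": "Litterbox used - cat weighed in at %s g" % message.payload.decode(),
    })

client = mqtt.Client()
client.on_message = on_weight
client.connect("mqtt.house.local")   # hypothetical in-house broker
client.subscribe("house/litterbox/weight")
client.loop_forever()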

Most of the above can be done either in the cloud (as in: outside the house, with MQTT being the channel out) or inside the house with everything self-hosted. Additional articles on this blog will cover these topics later.

There’s a lot of opportunity to add more logic, but as far as our experiments and requirements go we are happy with the results so far – we now regularly get a weight, plus the added information of how often the cat is using her litterbox. Especially with some medical conditions this is quite interesting and important information to have.

I wish there was: cheap network microphones with open source speech recognition

I was on a business trip the other day and the office space of that company was very very nice. So nice that they had all sorts of automation going on to help the people.

For example, when you walk into a dark room, the system senses your presence and lights up the room for you. Very nice!

There was some lag between me entering the room, being detected and the light powering up. So while running into a dark room – knowing I would be detected and that there would soon be light – I shouted “Computer! Light!”.

That Star Trek reference brought back an old idea: it would be so nice to be able to control things through omnipresent speech recognition.

I am aware that there’s Siri, Cortana and Google Now. But those things are creepy because they involve external companies. If there are things listening to me all day, every day, I want them to stay within the premises of the house. I want to know, down to the exact data flow, what is going on and what is sent where. I do not want this stuff to leave the house at any time. Apart from that, those services work okay-ish, but well…

Let alone the hardware. Usually the existing assistants are carried around in smartphones and such. Very nice if you want to touch things before talking to them. I don’t want to. And no, “Hey Siri!” or “OK Google” is not really what I mean. Those things are not sophisticated enough yet. I used “Hey Siri!” for less than 24 hours, because in the first night it picked up something going on while I was sleeping, which made it go full-volume “How can I help!” on me. Yes, there’s no “don’t listen while I am sleeping” setting. Oh right, it does not know when I am sleeping. Well, you see: why not?

Anyway. What I wish there was:

  • cheap hardware – a microphone (or microphone array) to put into every room. It needs either WiFi or LAN – something that connects it to the network. A device that is carried around is not enough.
  • open source speech recognition – everything that is collected by the microphone is processed through an open-source speech recognition tool. Full-text dictation is a bonus; more important are heavy-duty command recognition and simple interactions.
  • open source text to speech – to answer back, if wanted

And all that should be working on a basic level without internet access. Just like that.

So? Any volunteers?

when you’re working late: grant your eyes a rest

“Ever notice how people texting at night have that eerie blue glow?

Or wake up ready to write down the Next Great Idea, and get blinded by your computer screen?

During the day, computer screens look good—they’re designed to look like the sun. But, at 9PM, 10PM, or 3AM, you probably shouldn’t be looking at the sun.

f.lux fixes this: it makes the color of your computer’s display adapt to the time of day, warm at night and like sunlight during the day.

It’s even possible that you’re staying up too late because of your computer. You could use f.lux because it makes you sleep better, or you could just use it just because it makes your computer look better.”


Source: https://justgetflux.com/

When your VU+ DUO just shows a red light and does not start up

So it happened to one of the VU+ Duos in the house. After a clean shutdown it did not boot up as expected but instead just showed the red light. It still blinked on remote keypresses and the harddisk spun up. Nothing else happened with it.

So it was bricked.

Reading the forums about this pointed to a capacitor on the board that seems to fail quite regularly. C807 is its name, and it’s located near the hard disk and the power supply section of the VU+ Duo.

When I looked at the capacitor it did not seem to be faulty or anything. So, without the right tools to measure it, I decided to just give it a shot and replace the original 16V 220uF 85°C capacitor with a 105°C 16V 330uF one.

In my case I took out the board, to have a little bit of extra space, and cut off the old capacitor. Desoldering would have looked nicer but, well…

Replacing it on the left-over pins of the old capacitor was a matter of seconds.

After putting the board back in, the VU+ Duo powered up and booted as new. Brilliant!

Next step: Holodeck.

“The Infinadeck is the world’s first affordable omnidirectional treadmill that is designed to work both in augmented and virtual reality.  This revolutionary device provides the missing link making it now possible to have a true Holodeck experience. You might say, “Reality just got bigger”.”

[youtube]https://www.youtube.com/watch?v=GoVAOfU8UJQ[/youtube]

Source: http://www.infinadeck.com/

Nitrous – full IDE in your browser – with Collaboration!

“Nitrous is a backend development platform which helps software developers save time by cutting out the repetitive parts of creating development environments and automating them.

Once you create your first development environment, there are many features which will make development easier.”


So what you’re getting is:

  • a virtual machine operated for you and set up with a single click
  • A full-featured IDE in your browser
  • Code-Collaboration by inviting others to edit your project
  • a debugging environment in which you can test-run and work with your code

Here are some screenshots to give you a feel for it:

Source: https://www.nitrous.io/

Boblight Alternative: Hyperion

After setting up Boblight on two TVs in the house – one with 50 and one with 100 LEDs – I have used it on an almost daily basis for the last 5 months.

First of all, every screen that does not come with “added color context” on the wall now seems off – it feels like something is missing. Second, it has made watching movies in a dark room much more enjoyable.

The only concerning factor of the past months was that the Raspberry Pi does not come with a lot of computational horsepower, and thus it has been operating at its limits all the time. With 95-99% CPU usage there’s not a lot of headroom for unexpected bitrate spikes and what-have-you.

So from time to time the Pis were struggling. With 10% CPU usage for the 50-LED and 19% for the 100-LED set-up going to Boblight alone, there was just not enough CPU power left for some movies or TV streams in Full HD.

Hyperion

So, since even overclocking only slightly eased the problem of Boblight using up precious CPU cycles for a fancy light show, I started looking around for alternatives.

“Hyperion is an opensource ‘AmbiLight’ implementation controlled using the RaspBerry Pi running Raspbmc. The main features of Hyperion are:

  • Low CPU load. For a led string of 50 leds the CPU usage will typically be below 1.5% on a non-overclocked Pi.
  • Json interface which allows easy integration into scripts.
  • A command line utility allows easy testing and configuration of the color transforms (Transformation settings are not preserved over a restart at the moment…).
  • Priority channels are not coupled to a specific led data provider which means that a provider can post led data and leave without the need to maintain a connection to Hyperion. This is ideal for a remote application (like our Android app).
  • HyperCon. A tool which helps generate a Hyperion configuration file.
  • XBMC-checker which checks the playing status of XBMC and decides whether or not to capture the screen.
  • Black border detector.
  • A scriptable effect engine.
  • Generic software architecture to support new devices and new algorithms easily.

More information can be found on the wiki or the Hyperion topic on the Raspbmc forum.”
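The JSON interface is what makes Hyperion so scriptable. As far as I can tell it listens on TCP port 19444 by default and accepts newline-terminated JSON commands – a hedged sketch (host and priority value are illustrative):

import json
import socket

def set_color(r, g, b, host="raspbmc.local", port=19444):
    """Ask a running Hyperion to show one solid color."""
    command = {"command": "color", "priority": 100, "color": [r, g, b]}
    with socket.create_connection((host, port)) as sock:
        sock.sendall((json.dumps(command) + "\n").encode())
        print(sock.recv(1024).decode())   # Hyperion answers with a success flag

set_color(255, 0, 0)   # mood light: everything red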

Especially the low CPU load raised my interest.

Setting Hyperion up is easy if you just follow the very straightforward installation guide. On Raspbmc the set-up took me 2 minutes at most.

Once you’ve got everything set up on the Pi you need to generate a configuration file. It’s a nice JSON-formatted config file that you do not need to create on your own – Hyperion has a nice configuration tool, HyperCon:


So after 2 more minutes the whole thing was set up and running. Another 15 minutes of tweaking here and there, and Hyperion had replaced Boblight entirely.

What have I found so far?

  1. Hyperion’s network interfaces are much more controllable than Boblight’s. You can use remote clients on iPhone / Android to set colors and/or patterns.
  2. It’s got effects for screen-saving / mood-lighting!
  3. It really does use a lot less CPU resources. Instead of 19% CPU usage for 100 LEDs it’s down to 3-4%. That’s what I call a major improvement.
  4. The processing filters you can add really add value. Smoothing everything, so that you do not get bright flashes when content flashes on-screen, is easy to do and really helps with the experience.

All in all, Hyperion is a recommended replacement for Boblight. I would not want to switch back.

Source 1: Setting up Boblight
Source 2: https://github.com/tvdzwan/hyperion/wiki/Installation

ZFS Tutorial

“ZFS is really the final word in filesystems. With a feature set longer than this tutorial, it can take a while to master. You can set many more options per dataset, enable disk usage quotes and much more. Once you’ve used it and seen the benefits, you’ll probably never want to use anything else. Hopefully this has been helpful to get you on your way to becoming a FreeBSD ZFS master.”

Source: http://www.bsdnow.tv/tutorials/zfs

setting up boblight with a Raspberry Pi and RaspBMC

Some might know AmbiLight – a great invention by Philips that projects colored light around a TV screen based upon the contents shown. It’s a great addition to a TV but naturally only available with Philips TV sets.

Not anymore. There are several open-source projects that allow you to build your very own Ambilight clone. I’ve built one using a 50-LED WS2801 strip, a 5V/10A power supply, a Raspberry Pi, and the Boblight integration in RaspBMC (a nice XBMC distribution for the Pi).

“Boblight is a collection of tools for driving lights connected to an external controller.

Its main purpose is to create light effects from an external input, such as a video stream (desktop capture, video player, tv card), an audio stream (jack, alsa), or user input (lirc, http). Boblight uses a client/server model, where clients are responsible for translating an external input to light data, and boblightd is responsible for translating the light data into commands for external light controllers.”

The hardware to start with looks like this:


I’ve fitted some heat sinks to the Pi, since controlling 50 LEDs adds a little bit of extra CPU load – and that CPU power is desperately needed when playing Full-HD, high-bitrate content.

The puzzle pieces need to be put together as described by the very good Adafruit diagram:

As you can see, the Pi is powered directly through the GPIO pins. You’re not going to use the MicroUSB or the USB ports to power the Pi. It’s important that you keep the cables between the Pi and the LEDs as short as possible. When I added longer / unshielded cables, everything started flickering. You do not want that – so short cables it is :-)


When you look closely at the above picture you will find a CO and DO on the PCB of the LED; on the other side of the PCB there’s a CI and DI. Guess what: that means Clock OUT and Data OUT on one side, Clock IN and Data IN on the other. Don’t be misled by the adapter cables the LED strips come with. My output socket looked damn close to what I thought was an input socket. If nothing seems to work on the first tries – you’re holding it wrong! Don’t let the adapters fitted by the manufacturer mislead you.

Depending on the manufacturer of your particular LED strip, layouts different from the above image are possible. Since RaspBMC comes bundled with Boblight already, you want to use something that is compatible with Boblight – something that allows Boblight to control each LED’s color and brightness separately.

I opted for WS2801-equipped LEDs. This pretty much means that each LED sits on its own WS2801 chip, and that chip takes commands for color and brightness. There are other options as well – I hear that LPD8806 chips also work with Boblight.

My power supply got a little bit too beefy – 10 amps is plenty. I originally planned to have 100 LEDs on that single TV. Each LED at full white brightness consumes 60mA – which brings us to 6 amps for 100 of them. Add the 2 amps for the Pi and you’re at 8A. So 10A was the choice.
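The same back-of-the-envelope calculation, for anyone re-doing it with a different LED count:

LED_COUNT = 100
MA_PER_LED = 60      # one WS2801 LED at full white brightness
PI_AMPS = 2.0        # budget for the Raspberry Pi itself

total_amps = LED_COUNT * MA_PER_LED / 1000.0 + PI_AMPS
print("the 5V supply should deliver at least %.1f A" % total_amps)   # -> 8.0 A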

To connect to the Pi’s GPIO pins I used simple jumper wires, after a little bit of boblightd compilation on a vanilla Raspbian SD card (how-to here). Please note that with current RaspBMC versions you do not need to compile Boblight yourself – I just took a clean Raspbian image for debugging purposes and compiled it myself to run some boblight-constant tests. boblight-constant is a tool that comes with Boblight and allows you to set all LEDs to one color.

If everything is right, it should look like this:

Now everything depends on what your LED strips look like and what your TV’s backside looks like. I wanted to fit my setup to a 42″ Samsung TV. This one is already fitted with an ultra-slim wall mount, which makes it sit pretty much flat on the wall like a picture. I wanted the LEDs to sit right on the TV’s back, and I figured that cut-to-size cable channels would do the job pretty nicely.

To get RaspBMC working with your setup the only things you need to do are:

  1. Enable Boblight support in the Applications / RaspBMC tool
  2. Log in to your RaspBMC Pi through SSH (user pi, password raspberry) and copy your boblight.conf file to /etc/boblight.conf.

The configuration file can be obtained from the various tutorials that deal with Boblight configuration. You can create the configuration the hard way, or take the rather easy route of using the boblight configuration tool.

I’ve used the tool :-)

Now if everything went right, you don’t have flickering, the TV is on the wall, and you can watch movies and whatnot with beautiful light effects around your TV screen. If you need to test your set-up to tweak it a bit more, go with this or this.


Source 1: http://en.wikipedia.org/wiki/Ambilight
Source 2: http://www.raspberrypi.org/
Source 3: https://code.google.com/p/boblight/
Source 4: http://www.raspbmc.com/
Source 5: http://learn.adafruit.com/light-painting-with-raspberry-pi/hardware
Source 6: How-To-Compile-Boblight
Source 7: Boblight Config Generator
Source 8: Boblight Windows Config Creation Tool
Source 9: Test-Video 1
Source 10: Test-Video 2

Instruction-less computing: Doing stuff with a CPU without actually executing instructions

Having fun with hardware is a good way to learn about the machines which soon will become our new overlords. With this pretty interesting presentation you can dive deep into what a CPU does and how it can be exploited to run code by not running it.

Trust Analysis, i.e. determining that a system will not execute some class of computations, typically assumes that all computation is captured by an instruction trace. We show that powerful computation on x86 processors is possible without executing any CPU instructions. We demonstrate a Turing-complete execution environment driven solely by the IA32 architecture’s interrupt handling and memory translation tables, in which the processor is trapped in a series of page faults and double faults, without ever successfully dispatching any instructions. The “hard-wired” logic of handling these faults is used to perform arithmetic and logic primitives, as well as memory reads and writes. This mechanism can also perform branches and loops if the memory is set up and mapped just right. We discuss the lessons of this execution model for future trustworthy architectures.


Source 1: https://www.usenix.org/conference/woot13/page-fault-weird-machine-lessons-instruction-less-computation

when DVB-T is not interesting, use the hardware for fun and SDR!

SDR – or Software Defined Radio – is a relatively cheap and fun way to dive deeper into radio communication.

“Software-defined radio (SDR) is a radio communication system where components that have been typically implemented in hardware (e.g. mixers, filters, amplifiers, modulators/demodulators, detectors, etc.) are instead implemented by means of software on a personal computer or embedded system. While the concept of SDR is not new, the rapidly evolving capabilities of digital electronics render practical many processes which used to be only theoretically possible.” (Wikipedia)

So with cheap hardware it’s possible to receive radio transmissions on all sorts of frequencies and modulations. Since everything after the actual “receiving stuff” phase happens in software, the things you can do are sort of limitless.

Now what about the “relatively cheap” factor? The hardware you need to get started is a DVB-T USB stick, widely available for about 25 euros. The important feature to look for is that it comes with a Realtek RTL2832U chip.

“The RTL2832U is a high-performance DVB-T COFDM demodulator that supports a USB 2.0 interface. The RTL2832U complies with NorDig Unified 1.0.3, D-Book 5.0, and EN300 744 (ETSI Specification). It supports 2K or 8K mode with 6, 7, and 8MHz bandwidth. Modulation parameters, e.g., code rate, and guard interval, are automatically detected.

The RTL2832U supports tuners at IF (Intermediate Frequency, 36.125MHz), low-IF (4.57MHz), or Zero-IF output using a 28.8MHz crystal, and includes FM/DAB/DAB+ Radio Support. Embedded with an advanced ADC (Analog-to-Digital Converter), the RTL2832U features high stability in portable reception.” (RealTek)

You’ll find this chip in all sorts of cheap DVB-T USB sticks like this one:

To use the hardware directly you can use open-source software which comes pre-packaged with several important/widely used demodulator modules like AM/FM. Gqrx SDR is available for all sorts of operating systems and comes with a nice user interface to control your SDR hardware.

The neat thing about SDR is that, depending on the capabilities of your SDR hardware, you are not tuned into just one specific frequency but into a whole spectrum several MHz wide. With my device I get roughly a full 2 MHz wide spectrum, allowing me to see several FM stations on one spectrum diagram and tune into them individually using the demodulators:

The above screenshot shows the OS X version of Gqrx tuned into an FM station. You can clearly see the 3 stations that I can receive in that MHz range: one very strong signal, one very weak, and one sort of in the middle. By just clicking there, the SDR tool decodes that portion of the data stream / spectrum and you can listen to an FM radio station.

Of course those DVB-T sticks come with a wide usable spectrum – mine has an Elonics E4000 tuner which allows me to receive, more or less usably, 53 MHz to 2188 MHz (with a gap from 1095 to 1248 MHz).
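If you would rather grab that spectrum programmatically instead of through Gqrx, the pyrtlsdr Python bindings (assuming you have librtlsdr and the pyrtlsdr package installed) make reading raw IQ samples a few lines:

from rtlsdr import RtlSdr   # pyrtlsdr bindings for librtlsdr

sdr = RtlSdr()
sdr.sample_rate = 2.048e6   # a roughly 2 MHz wide chunk of spectrum
sdr.center_freq = 100e6     # parked in the middle of the FM broadcast band
sdr.gain = 'auto'

samples = sdr.read_samples(256 * 1024)   # complex IQ samples
sdr.close()
print("read %d IQ samples" % len(samples))
# everything from here on - filtering, FM demodulation, ... - is pure software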

Whatever your hardware can do can be tested by using the rtl_test tool:

root@berry:~# rtl_test -t
Found 1 device(s):
0:  Terratec T Stick PLUS

Using device 0: Terratec T Stick PLUS
Found Elonics E4000 tuner
Supported gain values (14): -1.0 1.5 4.0 6.5 9.0 11.5 14.0 16.5 19.0 21.5 24.0 29.0 34.0 42.0
Benchmarking E4000 PLL…
[E4K] PLL not locked for 52000000 Hz!
[E4K] PLL not locked for 2189000000 Hz!
[E4K] PLL not locked for 1095000000 Hz!
[E4K] PLL not locked for 1248000000 Hz!
E4K range: 53 to 2188 MHz
E4K L-band gap: 1095 to 1248 MHz

Interestingly, when you plug the USB stick into a Raspberry Pi and follow some instructions, you can use the Raspberry Pi as an SDR server – allowing you to place it in the attic for better reception while still sitting comfortably at your computer downstairs.

If you want to upgrade your experience with more professional hardware – and in fact, if you have a transmitter license – you can take a look at the HackRF project, which is currently creating a highly sophisticated SDR hardware+software solution:


Source 1: http://www.realtek.com.tw/products/productsView.aspx?Langid=1&PFid=35&Level=4&Conn=3&ProdID=257
Source 2: http://gqrx.dk/
Source 3: www.hamradioscience.com/raspberry-pi-as-remote-server-for-rtl2832u-sdr/
Source 4: http://ossmann.blogspot.de/2012/06/introducing-hackrf.html
Source 5: https://github.com/mossmann/hackrf

the Panic Status Board is here!

Last year in June I wrote about the concept of a ubiquitous status display of the business in every office. Especially for development and operations it’s pretty important to have key measurements, status codes and project information in front of you all the time.

Back then I already wrote about the Panic status board, which is a great-looking example of a status display. Now there is software from the company Panic that offers anyone the ability to create such a status board. It’s for iOS and looks awesome!


Source 1: Mirror, Mirror on the wall
Source 2: http://panic.com/statusboard/

my home is my castle – CastleOS: the home automation operating system

And once again some smart people put their heads together and came up with something that will revolutionize your world. Well, it’s ‘just’ home automation, but it does look very, very promising – especially the human-machine interface through speech recognition. First of all, let’s start with a short introductory video:

“CastleOS is an integrated software suite for controlling the automation equipment in your home – an operating system for your castle, if you will. The first piece of the suite is what we call the “Core Service” – it acts as the central controller for the whole system. This runs on any relatively recent Windows computer (or more specifically, the computer that has an Insteon PLM or USB stick plugged in to it), and creates a network connection to both your home automation devices, and the second piece of the integrated suite – the remote access apps like the HTML5 app, Kinect voice control app, and future Android/iOS apps.” (from the CastleOS page)

So it’s said to be an all-in-one system that controls power outlets and devices through its core service, offering the option to add Kinect-based speech recognition so you can say things like “Computer, Lights!”.

Unfortunately it comes with quite high and hard requirements regarding the hardware it’s compatible with. A Kinect possibly exists in your household, but I doubt you have the Insteon hardware to control your devices with.

That seems to be the main problem of all current home automation solutions – you just have to have the matching hardware to use them. It’s not really possible to use anything and everything in a standardized way. Maybe it’s time for a “home plug’n’play” specification, set up for all hardware and software vendors to follow?

Source 1: http://www.castleos.com

h.a.c.s. html5 user interface re-implemented

Slow is the right word to describe my HTML and JavaScript learning-by-doing progress right now. I have chosen the h.a.c.s. user interface as a suitable project to learn HTML and JavaScript up to the point where I can start to write usable websites. The h.a.c.s. UI seemed to be a good choice because at the moment it’s only used by my family, and they are a bunch of battle-proven beta testers.

So first a small video to get an idea what I am implementing right now:

So all you can see is SVG- and HTML-rendered stuff – made with the help of awesome JavaScript libraries, such as:

  • jQuery
    • for the basic javascript coverage
  • Raphaël
    • to draw SVG in a human-controllable way
  • JustGage
    • to draw those nice gauges
  • OdoMeter
    • an animated HTML5 canvas odometer

I plan to add a lot more – support for swiping gestures, for example. So this will be – just like h.a.c.s. – a continuous project. Since I switched entirely to OS X at home, I use the great Coda 2 to write and debug the code. It helps a lot to have a two-browser set-up, because for some reason I still don’t feel that comfortable with the WebKit Web Inspector.


Another great feature of Coda2 is the AirPreview – which means it will preview your current page in the editor on an iOS device running DietCoda – oh how I love those automations.

So I reached the first goal I had set for myself for the user interface: it does the things the old UI did, and on top of that it’s maintainable. I am still struggling with JavaScript here and there – mainly because debugging and tracing is oh-so-difficult (or I am too slow to understand it).

If you have any recommendation for a JavaScript editor that can handle multiple includes, debugging (step-by-step, …) and good event tracing, please comment!

Source 1: jQuery
Source 2: Raphaël
Source 3: JustGage
Source 4: OdoMeter

know your numbers!

Wikipedia describes latency this way:

“Latency is a measure of time delay experienced in a system, the precise definition of which depends on the system and the time being measured. In communications, the lower limit of latency is determined by the medium being used for communications. In reliable two-way communication systems, latency limits the maximum rate that information can be transmitted, as there is often a limit on the amount of information that is “in-flight” at any one moment. In the field of human-machine interaction, perceptible latency has a strong effect on user satisfaction and usability.” (Wikipedia)

Given that, it’s quite important for any developer to know their numbers. Latency has a huge impact on how software should be architected, so it’s important to keep these figures in mind:
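For reference, the classic ballpark figures from the linked interactive visualization (roughly the 2012 numbers, rounded) – treat them as orders of magnitude, not gospel:

  • L1 cache reference: ~0.5 ns
  • branch mispredict: ~5 ns
  • L2 cache reference: ~7 ns
  • mutex lock/unlock: ~25 ns
  • main memory reference: ~100 ns
  • compress 1 KB with a fast compressor: ~3 µs
  • send 1 KB over a 1 Gbps network: ~10 µs
  • read 4 KB randomly from an SSD: ~150 µs
  • read 1 MB sequentially from memory: ~250 µs
  • round trip within the same datacenter: ~500 µs
  • read 1 MB sequentially from an SSD: ~1 ms
  • disk seek: ~10 ms
  • read 1 MB sequentially from disk: ~20 ms
  • packet round trip CA ↔ Netherlands ↔ CA: ~150 ms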


Source: http://www.eecs.berkeley.edu/~rcs/research/interactive_latency.html