Time estimation in software development

I have found myself in these spots several times in my life: either I had to deliver on an estimate, or I had to accept someone else's estimate and deal with the outcome.

If you are involved in anything digital or software related, this is a recommended piece to read:

Anyone who built software for a while knows that estimating how long something is going to take is hard. It’s hard to come up with an unbiased estimate of how long something will take, when fundamentally the work in itself is about solving something. One pet theory I’ve had for a really long time, is that some of this is really just a statistical artifact.

Why software projects take longer than you think

blocking ads and promotions on twitter

When a group of people who share the same problem meet, they work together and sometimes run an experiment.

Nobody likes ads or “promotional content”.

At some point Twitter chose to push ads into every timeline in the official Twitter client, and decided to make them look like normal timeline content.

It did not take long for a group of people who do not like that to meet and join forces: for about a week now, a small group of people has been merging their Twitter block lists using the Block Together service.

This experiment is great since it is completely effortless. You link your block lists once, and from then on you keep using Twitter like you always did. Whenever you see a paid promotion, you block it. From then on, nobody on the list will see promotions or timeline entries from that specific Twitter user (unless they actively follow them).

17326 accounts blocked! Wow! I started with about 3500 before merging with others.

And the effect after about a week is just great! I cannot see a downside so far, and the amount of promotional content on my timeline has shrunk to the point where I do not see any at all.

This is a great way to get rid of content you’ve never wanted and focus on the information you want.

Twitter Blocklists

My usual Twitter use looks like this: I am scrolling through the timeline, reading up on things, and I see an ad. I click block, and never again will I see anything from this advertiser – as I have written here earlier.

As Twitter is also a place of very disturbing content, there are numerous services built around the official block list functionality. One of those services is “Block Together”.

Block Together is designed to reduce the burden of blocking when many accounts are attacking you, or when a few accounts are attacking many people in your community. It uses the Twitter API. If you choose to share your list of blocks, your friends can subscribe to your list so that when you block an account, they block that account automatically. You can also use Block Together without sharing your blocks with anyone.

blocktogether.org

I’ve signed up and apparently this is as easy as it gets when you want to share block lists.

There seem to be more people who use Twitter the way I do. For example, Volker Weber wrote about his handling of “promoted content”.

My block list on Twitter currently includes 1881 accounts and these are only accounts that put paid promotions without my request into my timeline.

I've read that Volker has such a long list as well – maybe it's worth sharing, as Volker is one person whose judgment I would trust for such a list. (vowe is a good mother!)

bringing the thinclient back

I had a problem to solve: I wanted the exact same session and screen shared across different workplaces and locations simultaneously – from looking at the same screen from a different floor, to having the option to just walk over to the lab desk, solder some circuits together, and find the very same documents already open on the screens over there.

One option was to use a tablet or notebook and carry it around. But this would not solve the need to have the screen content displayed on several screens simultaneously.

Also, I did not want to rely on the computing power of a notebook or tablet alone. Of course those get more powerful over time, but each step up would mean purchasing a new device.

Then, in a move of desperation, I remembered the “old days” when ThinClients were the new kid in town. And then I tried something:

I just recently had moved all house server infrastructure over to Linux and Docker. So what would keep me from utilizing the computing power of that one beefy server in the basement to host all of my desktop needs?

It turns out: nothing, really. Docker is well prepared to host desktop environments. With a bit of tweaking and TigerVNC's Xvnc, I was able to pre-configure the most current Ubuntu to start my preferred MATE desktop environment in a container and expose it through VNC.

If you wanted to replicate this I would recommend this repository as a starting point.
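To give an idea of the moving parts, here is a minimal sketch of such a container start. The image name is a placeholder for whatever you build from that repository, and 5901 is the conventional port for VNC display :1 – both are assumptions, not the repository's actual defaults:

# run the pre-built desktop image in the background
docker run -d \
  --name mate-desktop \
  -p 5901:5901 \
  -v /srv/desktop-home:/home/user \
  my-ubuntu-mate-xvnc
# then point any VNC client at <server>:5901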

Even better I found that the RaspberryPi single board computers come with a free pre-licensed and accelerated version of RealVNC.

So I took one of those RaspberryPis, booted up Raspbian Desktop Lite and connected to the Docker container's VNC port. It all worked just like that.

this is the RaspberryPi client with the windowed docker container VNC session

The screenshot above holds an additional piece of information for you: I wanted sound! Video playback is smooth up to a certain size of the moving video – after all, those RaspberryPis only come with sub-Gbit/s wired networking. But to get sound working, I had to take some additional steps.

First, on the RaspberryPi that is supposed to output the sound to the speakers, you need to install and set up PulseAudio plus paprefs. Once you configure it to accept audio over the network, you can configure the client to send it.

In the docker container a simple command would then redirect all audio to the network:

pax11publish -e -S thinclient

Just replace “thinclient” with the IP address or hostname of your RaspberryPi. After a restart, Chrome started to play audio across the network through the speakers of the ThinClient.
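For reference, the receiving end boils down to very little on the RaspberryPi – a sketch (paprefs is a GUI tool, so the network option is ticked there rather than on the command line):

# on the RaspberryPi that drives the speakers
sudo apt-get install pulseaudio paprefs
# in paprefs: enable network access to local sound devices

As an alternative to pax11publish, setting the PULSE_SERVER environment variable inside the container (e.g. PULSE_SERVER=thinclient) achieves the same redirection for programs started afterwards.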

Now all my screens have one of those RaspberryPis attached to them, and with Docker I can run as many desktop environments in parallel as I wish. And because VNC does not care how many connections are made to one session, I can have all workplaces across the house attached to the same session, seeing the same content at the same time.

And yes: the UI and overall feel is silky smooth. Since VNC adapts to the available bandwidth to some extent by changing the image quality, the sessions are very much usable even across the internet. Given that there is only one port for video and one port for audio, it is even possible to tunnel those sessions to anywhere you might need them.

wireless mesh network

Since AVM has started to offer wireless mesh network capabilities in their products through software updates I started to roll it out in our house.

Wireless mesh networks often consist of mesh clients, mesh routers and gateways. Mobility of nodes is less frequent. If nodes constantly or frequently move, the mesh spends more time updating routes than delivering data. In a wireless mesh network, topology tends to be more static, so that routes computation can converge and delivery of data to their destinations can occur. Hence, this is a low-mobility centralized form of wireless ad hoc network. Also, because it sometimes relies on static nodes to act as gateways, it is not a truly all-wireless ad hoc network.

Wikipedia

With the rather complex physical network structure and above-average number of wireless and wired clients the task wasn’t an easy one.

To give an impression of what is there right now:

So there is a bit of almost everything. There are wired connections (1 Gbit/s to most places) and there are wireless connections. There are five access points overall, four of which are mesh repeaters coordinated by the Fritz!Box mesh master.

There is also powerline networking for some of the more distant rooms of the mansion. All in all there are four powerline connections, all of them above 100 Mbit/s, and one is even used for video streaming.

All is managed by a central Fritz!Box and all is well.

As in: without issues. Even interesting spanning-tree implementations like the one from SONOS are being handled properly and have always worked.

The only non-default configuration I made on the Fritz!Box is that all well-known devices have their IPv4 addresses set to static, so they are not frequently switching around the place.

How do I know it works? After enabling the mesh, things started working that had not worked before. Before the mesh set-up, I had several access points operating independently of each other on the same SSID, which would lead to hard connection drops when you walked between them. Roaming did not work.

With mesh enabled I’ve not seen this behavior anymore. All is stable even when I move actively between all floors and rooms.

How to get me to actively avoid your products

It is a simple one-step process: shove unsolicited advertising in my face. Bonus points for loud, full-blast audio right from the start.

Whenever I see unsolicited advertising, whether it tries to be sneaky or not, I am going to block it without even noticing whom it was from or what it was for.

But when it is shown so often and so intrusively that I do take note of your brand, that brand is not considered for future business anymore.

That goes especially for services where I am the product, paying with my data.


Apple Airplay for SONOS (in Docker)

We have a couple of SONOS-based multi-room audio zones in our house, and with the newest generation of SONOS speakers you get Apple AirPlay. Fancy!

But the older SONOS hardware does not support Apple AirPlay due to its limited hardware. That is too bad.

So once again Docker and OpenSource + Reverse-Engineering come to the rescue.

AirConnect is a small but fancy tool that bridges SONOS and Chromecast to Airplay effortlessly. Just start and be done.

It works a treat and all of a sudden all those SONOS zones become Airplay devices.

There is also a nice dockerized version that I am using.
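A sketch of how the containerized AirConnect can be started – the image name is one of the community builds and should be treated as an assumption; host networking is used so that the multicast discovery between SONOS, Chromecast and AirPlay devices keeps working:

docker run -d --name airconnect --net host 1activegeek/airconnect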

“making your home smarter” – use case #12 – How much time do I have until…?

Did you notice that most calendars and timers are missing an important feature? A piece of information that I personally find most interesting to have readily available.

It is the information about how much time is left until the next appointment comes up. Even smartwatches, which should be the jack-of-all-trades in regards to time and schedule, do not display the “time until the next event”.

Now, I came across this shortcoming when I started to look for this information: no digital assistant can tell me right away how much time is left until a certain event.

But the connected house is based upon open technologies, so one can easily add this kind of feature oneself. My major use cases for this are (a) focused work, (b) planning quick work-out breaks, and (c) making sure there is enough time left to actually get enough sleep.

As you can see in the picture attached, my watch always shows me the hours (or minutes) left until the next event. I use separate calendars for separate displays – so there is actually one for when I plan to get up and do work-outs.

Having the hours left until something is supposed to happen visible at a glance – and of course being able to ask verbally, through chat or voice, in any room of the house how long it is until the next appointment – gives peace of mind :-).
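The arithmetic behind such a display is trivial. A minimal shell sketch, assuming the next event's start time has already been fetched from the calendar backend (the timestamp is a made-up example, and GNU date is assumed):

# start of the next event, as delivered by whatever calendar backend you use
NEXT_EVENT="2019-06-01 14:30"
NOW=$(date +%s)
THEN=$(date -d "$NEXT_EVENT" +%s)
LEFT=$(( THEN - NOW ))
echo "$(( LEFT / 3600 ))h $(( (LEFT % 3600) / 60 ))m until the next event"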


“making your home smarter”, use case #11 – money money money

The Internet of Things might as well become your Internet of Money. Some see the future in blockchain-related things like Bitcoin or Ethereum, and they might be right. Meanwhile, there is also the huge field of personal finances that impacts our lives every single day.

And if you think about it, money has a lot of touch points throughout all situations of our lives, so it also impacts the smart home.

Lots of sources of information can be accessed today and can help you stay on top of what is going on, as well as make conscious decisions and plans for the future. To a large extent, the information is even available in real time.

– cost tracking and reporting
– alerting and goal setting
– consumption and resource management – like fuel oil (get alerted on price changes, …)
– stock monitoring and alerting – and, more advanced, even automated trading
– bank account monitoring, in- and outbound transactions
– expectations and planning
– budgeting

After all, this is about getting away from lock-in applications, freeing your personal financial data, and having an overall dashboard of transactions, plans and status.

“make your home smarter”, use case #7 – hear that doorbell ringing!

We love music. We love it playing loud across the house. And when we did that in the past, we missed some things happening around us.

Like that delivery guy ringing the front doorbell and us missing an important delivery.

This happened a lot – UNTIL we retrofitted a little PCB to our doorbell circuit to make the house aware of ringing doorbells.

Now, every time the doorbell rings, a couple of things can take place:

– push notifications to all devices, screens and watches – that wakes you up even while doing workouts
– pause all audio and video playback in the house
– take a camera shot of who is standing in front of the door pushing the doorbell

And it is easy to wire up new things, whatever those may be in the future – for instance along the lines of the sketch below.
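The post leaves open how the PCB talks to the rest of the house, so purely as an illustration: assuming the doorbell publishes a message to an MQTT broker (broker name and topic are made up), the reaction pipeline could be as small as this:

# react to every published ring: notify, pause playback, grab a camera still
mosquitto_sub -h housebroker -t doorbell/pressed | while read -r _; do
  echo "doorbell rang at $(date)"
done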

in case of emergency: spoof your MAC address


There have been several occasions in the past years where I had to quickly change the MAC address of my computer in order to get proper network connectivity – be it a corporate network that does not allow me to use my notebook in a guest wifi because the original MAC address is “known”, or any other possible reason you can come up with…

Now this is relatively easy on Mac OS X – you can do it with just one line on the shell. But now there’s an App for that. It’s called Spoof:


“I made this because changing your MAC address in OS X is harder than it should be. The Wi-Fi card needs to be manually disassociated from any connected networks in order for the change to apply correctly – super annoying! Doing this manually each time is tedious and lame.

Instead, just run spoof and change your MAC address in one command. Now for Linux, too!”
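For comparison, the manual one-liner on OS X looks roughly like this (en0 is typically the Wi-Fi interface, and remember the caveat from the quote about disassociating from the network first), followed by the spoof equivalent as I read the project README:

# the manual way: assign a new hardware address by hand
sudo ifconfig en0 ether 00:11:22:33:44:55
# with spoof: pick a random address and handle the details in one command
sudo spoof randomize en0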

Source: https://github.com/feross/spoof

Nitrous – full IDE in your browser – with Collaboration!

“Nitrous is a backend development platform which helps software developers save time by cutting out the repetitive parts of creating development environments and automating them.

Once you create your first development environment, there are many features which will make development easier.”


So what you’re getting is:

  • a virtual machine operated for you and set up with a single click
  • a full-featured IDE in your browser
  • code collaboration by inviting others to edit your project
  • a debugging environment in which you can test-run and work with your code

Here are some screenshots to get you a feel for it:

Source: https://www.nitrous.io/

using the RaspberryPi to make all SONOS speakers support Apple Airplay

Airplay allows you to conveniently play music and videos over the air from your iOS or Mac OS X devices on remote speakers.

Since we just recently “migrated” almost all audio equipment in the house to SONOS multi-room audio, we were missing the convenience of just pushing a button on the iPad or iPhone to stream audio from those devices inside the household.

To retrofit the Airplay functionality there are two options I know of:

1: Get Airplay compatible hardware and connect it to a SONOS Input.

You have to get AirPlay hardware (like the AirPort Express/Extreme, …) and attach it physically to one of the inputs of your SONOS set-up. Typically you will need a SONOS Play:5, which has an analog input jack.


2: Set-Up a RaspberryPi with NodeJS + AirSonos as a software-only solution

You will need a stock RaspberryPi online in your home network. Of course this can run on virtually any other device or hardware that can run NodeJS. For the Pi, setting it up is a fairly straight-forward process:

You start with a vanilla Raspbian Image. Update everything with:

sudo apt-get update

sudo apt-get upgrade

Then install NodeJS according to this short tutorial. To set up the AirSonos software, you will need to install additional Avahi packages. In particular, this was needed for my install:

sudo apt-get install git-all libavahi-compat-libdnssd-dev

You then need to get the AirSonos software:

sudo npm install airsonos -g

After some minutes of wait time and hard work by the Pi you will be able to start AirSonos.

sudo airsonos

And it’ll come up with an enumeration of all active rooms.

And on all your devices, the rooms will then show up as additional AirPlay targets.

Source: https://github.com/stephen/airsonos

Need to do Load Tests? Try Tsung!

Tsung is an open-source multi-protocol distributed load testing tool

It can be used to stress HTTP, WebDAV, SOAP, PostgreSQL, MySQL, LDAP and Jabber/XMPP servers. Tsung is a free software released under the GPLv2 license.

The purpose of Tsung is to simulate users in order to test the scalability and performance of IP based client/server applications. You can use it to do load and stress testing of your servers. Many protocols have been implemented and tested, and it can be easily extended.

It can be distributed on several client machines and is able to simulate hundreds of thousands of virtual users concurrently (or even millions if you have enough hardware …).
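Running a test is pleasantly terse once you have written a scenario file (the file name here is a placeholder, and the path of the report script varies by distribution):

# run the load test described in the XML scenario
tsung -f scenario.xml start
# afterwards, generate the HTML report inside the newest log directory
/usr/lib/tsung/bin/tsung_stats.pl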

Source 1: http://tsung.erlang-projects.org/

MOSH (Mobile Shell) – fixing SSH for everyone

How many times did you experience a connection loss in your terminal window in the last week? Yeah, I know – like every time you close the lid of your notebook and move to a different place. So, like a dozen times every day.

And every time, you reconnect to your servers, and you use things like screen to keep your terminals open and your programs running while you are disconnected.

On the other hand – did you ever curse the internet gods while trying to do a very important check or bugfix on a machine from a train or a mobile roaming network? It is not what I would call fun times. Even when there are no constant disconnects, the lag is just infuriating. MOSH solves this too, since it predicts and responds way faster than vanilla SSH. Your terminal becomes usable again!

So there’s now MOSH to the rescue:

Remote terminal application that allows roaming, supports intermittent connectivity, and provides intelligent local echo and line editing of user keystrokes.
Mosh is a replacement for SSH. It’s more robust and responsive, especially over Wi-Fi, cellular, and long-distance links.
Mosh is free software, available for GNU/Linux, FreeBSD, Solaris, Mac OS X, and Android.

Video: http://www.youtube.com/watch?v=XsIxNYl0oyU

Install it on your servers and your clients and never lose a connection again.
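Getting started really is that simple – Debian/Ubuntu shown; note that Mosh uses UDP ports 60000–61000, which the server's firewall must allow:

# on both the server and the client
sudo apt-get install mosh
# then connect just like with ssh
mosh user@server.example.com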

Source 1: http://www.gnu.org/software/screen/
Source 2: http://mosh.mit.edu

IPv6 native root server has problems with OpenFire Jabber / XMPP Server to Server

I was setting up a new root-server machine and went for the Debian 7 minimal set-up. Thankfully, the root-server provider I am using (Hetzner) is connected with IPv4 and IPv6 natively. Awesome stuff!

If you are using an IPv6 native set-up these days, you STILL have to be cautious about possible side effects with software that has bugs and does not know how to deal with these ginormous IP addresses.

There is a well-known Jabber / XMPP server that I have been using for some years now without any issues. I was even using it on natively IPv6-connected machines earlier.

But with the fresh and clean set-up of Debian 7 and IPv6 by the hoster several problems started bubbling up.

1: the ‘there can only be one ipv*’ problem

Turns out the Debian team decided to enable a system setting by default that lets IPv6-aware applications bind to IPv6 only. The good thing is, you can disable it by adding this to your sysctl.conf:

net.ipv6.bindv6only=0
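To apply the setting immediately, without a reboot:

sudo sysctl -w net.ipv6.bindv6only=0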

2: the ‘who resolves first is right’ problem

When you get an IPv6 native machine, it might have a resolv.conf consisting of IPv4 and IPv6 name servers. And don't worry: everything is going to be all right as long as the software you are planning to use is perfectly capable of dealing with the answers of both types of servers. The IPv4 ones will default to the A records, the IPv6 ones to the AAAA records.

Now there’s OpenFire. A stable and easy to use XMPP / Jabber server implementation. It’s based upon Java and I am running it with Java 7 on my Debian machine.

Unfortunately, in the current 3.9.1 version of OpenFire there is a bug that leads to server-to-server XMPP connections not working when they resolve to IPv6. So, for example, your Google Talk contacts won't work at all.

The bug itself is rather stupid: it seems that OpenFire expects an IPv4 address from the DNS lookup and crashes on an IPv6 address.

The solution is as easy as the bug is stupid: Remove the IPv6 defaulting nameservers from your resolv.conf.

# nameserver config
nameserver 2xx.xxx.yyy.99
nameserver 2xx.xxx.yyy.100
nameserver 2xx.xxx.yyy.98
nameserver 8.8.8.8
nameserver 8.8.4.4
#nameserver 2axx:yyy:0:zzzz::add:9898
#nameserver 2axx:yyy:0:zzzz::add:9999
#nameserver 2axx:yyy:0:zzzz::add:1010

Source 1: defaulting to net.ipv6.bindv6only=1
Source 2: http://community.igniterealtime.org/thread/51902

weave your net of things that have internet…ehm – internet of things

(screenshot: a NodeRed flow editor)

“The internet of things” is a buzzword used more and more. It means that things around you are connected to the (inter)network and can therefore talk to each other and, when combined, offer fantastic new opportunities.

Yeah right.

So NodeRed is a NodeJS-based toolset that allows you to create so-called “flows” (see picture above). Those flows determine what reacts, and what happens, when things happen. Fantastic, told you!

Source 1: http://nodered.org/
Source 2: http://en.wikipedia.org/wiki/Internet_of_Things
Source 3: http://nodejs.org/

“Compressing” JSON to JSON


The internet, with all those browsers and JavaScript applications, brought data structures that are pretty straight-forward. One of them is JSON.

Wikipedia says about JSON:

“JSON (/ˈdʒeɪsɒn/ JAY-soun, /ˈdʒeɪsən/ JAY-son), or JavaScript Object Notation, is an open standard format that uses human-readable text to transmit data objects consisting of attribute–value pairs. It is used primarily to transmit data between a server and web application, as an alternative to XML.”

Unfortunately, complex JSON can get a bit heavy on the structure itself, repeating the same data schemes and IDs over and over.

There’s RJSON to the rescue on this. It’s backwards compatible and makes your JSON more compressible:

“RJSON converts any JSON data collection into more compact recursive form. Compressed data is still JSON and can be parsed with JSON.parse. RJSON can compress not only homogeneous collections, but also any data sets with free structure.

RJSON is single-pass stream compressor, it extracts data schemes from document, assign each schema unique number and use this number instead of repeating same property names again and again.”

Of course this is all open-source and you can get your hands dirty here.
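To illustrate the principle – this is a hand-made example of the scheme-extraction idea, not necessarily RJSON's exact output format:

// before: the same keys are repeated for every object
[{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}, {"id": 3, "name": "Carol"}]

// after: the first object defines the scheme; later objects shrink to value arrays referencing it
[{"id": 1, "name": "Alice"}, [1, 2, "Bob"], [1, 3, "Carol"]]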

Source 1: http://en.wikipedia.org/wiki/JSON
Source 2: http://www.cliws.com/e/06pogA9VwXylo_GknPEeFA/
Source 3: https://github.com/dogada/RJSON

IPv6 migration guide for the German public administration

The available IPv4 addresses are running out and IPv6 is coming – there is no doubt about that! This weblog, for example, has been natively reachable over IPv6 for more than two years. With every month that passes, things are getting more pressing, and accordingly this step is important for the public administration as well, among others. This comprehensive document gives interesting insights:

downloadable 270-page PDF

“Since the early days of the internet, version 4 of the Internet Protocol (IPv4) has been used to transmit data. Today this protocol is used everywhere, including in the internal networks of agencies and organizations. The internet and all networks that use IPv4 today are facing a profound technical change, because switching to the successor IPv6 is mandatory for everyone.

To the frequently asked question of which essential factors drive a migration to IPv6, there are two central answers:

  • There is a migration pressure that can be traced back to the IPv4 addresses which are already unavailable (in Asia).
  • With the growing demand for addresses for all small and large devices that have to communicate over IP networks – from sensors through smartphones to washing machines – the problem of the depleted IPv4 address space gets worse. The combination of both factors accelerates the push towards IPv6 migration.

In the future there will be many devices that only have an IPv6 address instead of an IPv4 address and are only reachable via that address. Even today, IPv6 can no longer be deactivated without restrictions in the most current operating system versions. Remaining IPv4 addresses can still be rented from providers for a fee. However, when changing providers in the context of a re-tendering of services, you will no longer be able to ‘take them along’. A migration to IPv6 therefore not only means the guaranteed availability of a sufficient number of IP addresses, it also secures the future reachability of your own services without being dependent on a single provider.”

Source 1: IPv6 Migrationsleitfaden für die öffentliche Verwaltung
Source 2: IPv6-Best Practice für die öffentliche Verwaltung

a virtual network inside your machine

Did you ever start a horde of virtual machines and a complicated VM-only network set-up just to simulate a moderately complex network and the interaction of nodes in that network? That is a tiresome, error-prone and labour-intensive process. Fear no more, there is a tool to the rescue.

“Mininet creates a realistic virtual network, running real kernel, switch and application code, on a single machine (VM, cloud or native), in seconds, with a single command:”


“Because you can easily interact with your network using the Mininet CLI (and API), customize it, share it with others, or deploy it on real hardware, Mininet is useful for development, teaching, and research. Mininet is also a great way to develop, share, and experiment with OpenFlow and Software-Defined Networking systems.

Mininet is actively developed and supported, and is released under a permissive BSD Open Source license. We encourage contribution of code, bug reports/fixes, documentation, and anything else that can improve the system!”

Source: http://mininet.github.com/

new actors to switch power on/off and measure power usage by AVM

Usually, the actors that allow you to switch power on/off and measure power usage communicate with their base station on the 434 MHz or 868 MHz wireless bands. Now the German manufacturer AVM has come up with a solution that allows you to switch power on/off (with an actual button on the device itself, and wirelessly!) and to measure the power consumption of the devices connected to it.

As unspectacular as it looks, the features are quite spectacular:


  • switch up to 2300 watts / 10 amperes
  • use different predefined settings to switch on/off, or even use Google Calendar to tell it when to switch
  • measure the energy consumption of connected devices
  • it uses the European DECT standard to communicate with a Fritz!Box base station (which is a requirement)

At around 50 Euros it is quite an investment, but maybe I'll give it a shot – especially the measurement functionality sounds great. Since I do not have one yet, I don't know anything about how to access it through third-party software (h.a.c.s.?).

Source 1: www.avm.de/de/News/artikel/2013/start_fritz_dect_200.html
Source 2: www.avm.de/de/Produkte/Smart_Home/FRITZDECT_200/index.php
Source 3: en.wikipedia.org/wiki/Digital_Enhanced_Cordless_Telecommunications

if this then that – simple recipes for home automation

Workflows are important – and with a lot of switching possibilities and even more sensors that measure things, it becomes important to be able to implement workflows behind all that hardware.

It is nice to be able to switch a light on and off whenever you want to. But isn't it even better to have some sort of workflow behind all sorts of triggers? Think of the possibilities!

If this then that is a service to help you define very simple workflows:

Want an example?

It knows a lot of ‘this’ and a lot of ‘that’. So give it a try – or even better, add your own home automation software as ‘this’ and ‘that’ :-)

Source 1: https://ifttt.com

extending the house storage

In times when mobile phone cameras produce pictures of 2 MByte each and decent DSLR cameras produce pictures of more than 20 MByte each – not to speak of the various sensors around the house – the question of how all of this is going to be stored is an interesting one.

Prices for mass storage have been dropping for years, and the sizes of hard disks are getting bigger and bigger. 3 TByte drives are fairly cheap now – cheap enough to consider serious redundancy even for home use.

With the home automation hobby and very specific needs when it comes to home entertainment and even watching TV (we don't watch live TV…), we have a relatively huge demand for storage space. As a result, we are already storing over 10 TByte of data, fully encrypted, redundant and backed up.

Our file server infrastructure grew with the needs over the years.

It started way back in 2003, when I set up the first fileserver for my apartment. It was a fairly huge 19-inch case with 5 hard disks (100 GByte each). This machine was filled up by 2005 and needed replacement.

We were in IDE land back then. Because the system hardware died on me due to a power surge, all the disks and a new mainboard were seated in a new case with room for a lot of disks.

One interesting detail might be that I consistently used Windows Server for that purpose.

The machine was never just a fileserver. It was an SMTP, IMAP, NNTP and media server all along, which led to a growing demand for CPU and memory resources. It started with an 800 MHz AMD Athlon (which died quickly), and for the next years to come I used a 2.8 GHz Intel Pentium 4. Everything started with Windows Server 2003 – bought in the Microsoft Store when I was a Microsoft employee.

Disk space demand kept growing, and in 2009 a new case, a new mainboard plus memory, and new disks were due.

Since 2009, a Core 2 Quad Q9550 at 2.8 GHz with 16 GByte of memory has been the heart of our fileserver. Since we frequently live-transcode video streams to feed iPads and iPhones around the house, that machine has plenty of grunt to feed the demand: we can have 2 iPhones and 2 iPads playing 720p content without stutters. Back in 2009 we also switched to a mixed IDE and SATA setup, as you can see in the picture:

There was plenty of room when the new case arrived – and it was getting crowded just two years later, in 2011. Every seat was taken – which means 13 disks in that case and one attached through USB.

That adds up to more than 16 TByte of raw storage. In 2011 we also upgraded to Windows Server 2008. We never lost a bit with that operating system, not under the heaviest load and even through serious hardware malfunctions. A lot of those 13 disks died throughout the years: almost one every two months was replaced – most of them through extended warranties – and of course we always have a spare ready to take its place. Only once did I have to rush to a store to get a replacement drive, when two disks failed shortly after each other. That's why there is that 2 TByte drive in the 1.5 TByte compound…

So it is getting full again. Since that case will not really hold more disks, and replacing them is getting harder because of the tight fit, the idea was born not to get a bigger case but to just add a NAS/SAN which holds 6 to 8 disks at once, comes with its own redundancy management, and exports one big iSCSI volume.

That said, a network card was added to the fileserver and a QNAP TS-859 Pro+ 8-bay appliance was bought. It is a shiny black device which uses less power than an additional case with an extra CPU and memory would have used, and after calculating a number of combinations, it is even the cheapest solution for an 8-drive set-up.

After some intensive testing it seems that the iSCSI approach is the most robust one. Since I am just done with testing the appliance the next step is to buy drives. So stay tuned!

Source 1: http://www.qnap.com/de/index.php?lang=de&sn=375&c=292&sc=528&t=532&n=3486

HTTP/2 RFC draft is out

Progress is showing on the next incarnation of the famous Hypertext Transfer Protocol, aka HTTP. Despite the fact that those four letters got hidden from modern browser address bars, it is still the cornerstone of everything your browser does on the network.

Based upon the work of Google and their SPDY implementation, it comes with a lot of things that come in handy given modern demands for security, performance and multi-channel data transport.

Source 1: http://tools.ietf.org/id/draft-ietf-httpbis-http2-00.html
Source 2: http://en.wikipedia.org/wiki/SPDY

generate C# classes from JSON data

It is a common use case: you have some JSON-formatted data and you want to interface with it using your favourite programming language, C#. You can write the appropriate classes yourself, or you could use the fabulous json2csharp helper page.

Source 1: http://json2csharp.com/
Source 2: http://jsonclassgenerator.codeplex.com/
Source 3: http://json.codeplex.com/

the off-site backup

Somehow, even at home, the amount of data keeps growing and growing – and at ever increasing speed… Every few years I completely replace the hard disk / storage solution in our household. That always means an investment, but it also makes sure that the data does not fall victim to some unfavorable mechanical, chemical or magnetic effect… So roughly every two years, everything gets copied over once. Last time that took a good week, but well, that's how it is…

For a number of reasons we have quite a large demand for storage space for a single household – partly because my wife is a photographer, but as the ‘throw-nothing-away’ type I surely contribute a good share as well…

Master of all our hard disks (no joke – the computers here essentially only have hard disks so they can boot) has always been a single machine, which likewise gets replaced completely every few years. At the moment this machine manages between 12 and 15 hard disks of various sizes – the main work is currently done by three separate (grown) RAID-5 volumes…

By the way: no, I cannot/will not run RAID-6 there without either using Linux (which is not an option for several reasons) or using a hardware controller – which, after relevant experiences across all kinds of hardware RAID controllers, is out of the question.

Underneath the whole disk stack sits a standard PC running Windows Server 2008 – partly because I still had a license lying around, and partly because in more than 10 years of gathering file-server experience I have never lost even a single byte under Windows. In addition, I have a huge pile of software that is Windows-only and basically has to run constantly to make sense (mail server buffer, news server mirror, music and video streaming server, media library, video recorder, …).

Truecrypt then grabs these three big RAID volumes and reliably encrypts and decrypts away – in effect there is not a single byte of data in this household that is not encrypted. Good for us.

Now, a RAID does not prevent the unfavorable effects mentioned above from happening, and at some point you will have one or more failures to deal with. Normally you replace the defective hard disk, resync the RAID, and everything keeps working without any data loss. However, that is not a backup. It is only a first line of defense against possible failures.

True to the following short piece of music:

RAID ist kein Backup (“RAID is not a backup”)

… a RAID is simply not a backup. Backups at our place are handled by a collection of scripts which create full backups and differential backups at fixed intervals. That produces a pile of 1 GByte files, which are then moved off-site via rsync in laborious (and, thanks to working QoS, unnoticed) work. The full backups simply take forever due to the sheer amount of data; they can be sped up quite easily by physically carrying the backup to the server on an external hard disk, so to speak… The differential backups usually run through quite quickly. Storage space on the internet keeps getting cheaper, and so we always have a good off-site backup of our data…

For Windows, besides the usual Cygwin ports of rsync, there is also a good GUI version called DeltaCopy. It copies reliably, and even if the DSL router reboots or hangs, it picks up the copy job on its own as soon as the network is available again.

For DeltaCopy to be able to drop off its data somewhere, the remote side naturally needs to run an rsync server. Configuring one is not particularly complicated – basically you only have to install rsync and adapt the rsyncd.conf file. In addition, you have to create a configuration file which lists the user accounts, one per line, in the form “username:password” – and that's about it. Rsync is very robust and, above all, well suited for low bandwidths: if only a few bytes of a file have changed, only the changed bytes have to be transferred.
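A minimal sketch of the receiving side (module name, path and user are placeholders):

# /etc/rsyncd.conf
[backup]
    path = /srv/backup
    auth users = backupuser
    secrets file = /etc/rsyncd.secrets
    read only = false

# /etc/rsyncd.secrets – one "username:password" per line, chmod 600
backupuser:s3cret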

Source 1: http://www.speichergurke.de
Source 2: http://www.aboutmyip.com/AboutMyXApp/DeltaCopy.jsp
Source 3: http://de.wikipedia.org/wiki/Rsync

Shairport – someone reversed an AirPort Express

Low-latency network audio was a dream for years (see articles from 2005 and 2008), and with AirPlay it is finally here.

I have been using the Apple AirPlay technology for several years now… after it got implemented into iOS, it is just fantastic to have the option of playing whatever sound source I want, loud and clear, in any room I want…

Okay, it is not quite as sophisticated as the Sonos solution regarding the control of multiple music sources in multiple rooms, but it gets the job done in an apartment.

So back to the topic: Apple integrated the AirPlay technology into their wireless base station “AirPort Express”. Basically AirPlay is a piece of software which receives an encrypted audio stream over the network and outputs the stream to the SPDIF or audio jack.

Back in 2005 there already was an emulator of this protocol called “Fairport”, but Apple decided to encrypt the AirPlay traffic. This led to the problem that the encryption key was unknown, because it is baked into the AirPort Express firmware. And this is where the good news starts:

“My girlfriend moved house, and her Airport Express no longer made it with her wireless access point. I figured it’d be easy to find an ApEx emulator – there are several open source apps out there to play to them. However, I was disappointed to find that Apple used a public-key crypto scheme, and there’s a private key hiding inside the ApEx. So I took it apart (I still have scars from opening the glued case!), dumped the ROM, and reverse engineered the keys out of it.”

So, to keep things short: someone got an AirPort Express, dumped the firmware, extracted the AirPlay encryption keys, and wrote an emulator of the AirPlay protocol which uses the key. Voilà!

ShairPort is available as source code on the author's site, and it is obviously unclear whether Apple will react by changing the encryption key in the future. But for the time being it works as advertised:

I took one of my computers and followed the instructions to update Perl, install MacPorts, and then run ShairPort. When ShairPort runs, it does not look as appealing as expected:

Notably, it uses IPv6 to communicate between iTunes and ShairPort… Oh, I almost forgot to show how it looks in iTunes:

On another side note: It works on Linux, Windows and Mac OS X :-)

Source 1: Apple AirPlay
Source 2: Sonos
Source 3: Apple AirPort Express
Source 4: ShairPort

winter 2011 hacking project: Home Automation

For the last 10+ years I have been fiddling with different home automation concepts – mostly without broad use cases, because at the time no one seemed interested in having sensors and actors all over their home. In fact, not that many people seem to care these days.

Having more and more hardware and software around us creates, for a broader audience, the use cases people like me have had for 10+ years. Mainstream is a bitch for nerds :-)

That said I found a nice plastic box I want to use in a winter project. This plastic box is called “EzControl XS1”. It comes with several visible and “invisible” interfaces.

The visible and obvious ones are: power, 100 Mbit Ethernet, and an SD card slot. So it takes some power and does something on the network. The not-so-obvious and therefore “invisible” interfaces are the most interesting ones: the EzControl XS1 comes with the ability to send and receive on 433 MHz and 868 MHz.


Yes, those are the ranges used by switchable and dimmable power sockets, temperature sensors, and AMR. The EzControl XS1 is not that cheap (coming in at 189 Euros for the base version, plus an additional 65 Euros per upgrade option). I do not own one yet, so the plan is to acquire at least one, start off with dimmable power sockets, and add more sensors and actors along the way.

One great feature of the EzControl XS1 is the embedded web server with which the user's application (the one I want to write) can interact using an HTTP/JSON protocol. Oh dear: sensor data and actor control using JSON. How great is that!

There is some example code available (even a proprietary iPad/iPhone client), but since I want some custom features I currently do not see available in existing software, I am going to write a set of tools which will fetch and log sensor data and run scripts to control actors. It will all be available as open source (license not yet chosen).
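To give a flavor of what that HTTP/JSON interaction could look like, a hypothetical poll with curl – the host name is a placeholder and the command parameter reflects my reading of the XS1 documentation, so treat both as assumptions:

# ask the XS1's embedded web server for its list of sensors
curl "http://xs1.example.lan/control?cmd=get_list_sensors"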

P.S.: If someone from Rose+Herleth is reading this and wants to help – send me a test unit :-)

Source 1: http://www.ezcontrol.de (in German though)
Source 2: http://en.wikipedia.org/wiki/Automatic_meter_reading
Source 3: http://www.ezcontrol.de/content/view/12/31/

Using Windows Deployment Services (WDS) to install Linux over Network (PXE)

Developing software is hard work – especially when you target several operating systems. One task you have to perform quite often is deploying a fresh installation of an operating system to a test machine as quickly as possible.

Doing this with Windows is easy – you can use the Windows Deployment Services to bootstrap Windows onto almost every machine that can boot over Ethernet using PXE. Everything needed to make WDS work with a Windows boot image is located on that image. Since it is that easy, I won't dive into more detail here.

What I want to show in greater detail is how you can use WDS to deploy even Linux over your network.

Step 1: Get PXELINUX

What’s needed to boot Linux over a network is a dedicated PXE Boot Loader. This one is called PXELINUX and can be downloaded here.

“PXELINUX is a SYSLINUX derivative, for booting Linux off a network server, using a network ROM conforming to the Intel PXE (Pre-Execution Environment) specification.”

On the homepage of PXELINUX there is also a short tutorial on which files you need and where to copy them.

Step 2: Setup WDS with PXELINUX

I assume you have your WDS installation up and running and are able to deploy Windows. If that is the case, go to your WDS server management tool and right-click on the server name – in my case “fileserver.sones”. If you select “Properties” in the context menu, you will see the properties window like in the screenshot below:

(screenshot: WDS server properties)

You have to change the boot loader from the standard Windows BootMgr to the newly downloaded PXELINUX boot loader. Since this boot loader comes with its own set of config files, you can edit its config file to allow booting into Windows as well.

Step 3: Edit the PXELINUX configuration file

The first entry I made in the boot menu of the PXELINUX boot loader is the “Install Windows…” entry. Since the first thing users will see after booting is the PXELINUX loader menu, they need to be able to continue to their Windows installation. Because that Windows installation cannot be handled by the PXELINUX loader itself, you have to define a boot menu entry which looks a lot like this:

LABEL wds
MENU LABEL Install Windows…
KERNEL pxeboot.0

To add OpenSuSE to the menu you would add an entry looking like this:

LABEL opensuse
MENU LABEL Install OpenSuSE 11.x
kernel /Linux/opensuse/linux
append initrd=/Linux/opensuse/initrd splash=silent showopts

The paths given in the above entry should be altered according to the paths you are using in your installation. I took the /Linux/opensuse/ files from the network-install DVD images of OpenSuSE.
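For completeness: both entries live in the pxelinux.cfg/default file on the WDS/TFTP share, below a small header that enables the menu system (the timeout is an arbitrary example, measured in tenths of a second):

DEFAULT menu.c32
PROMPT 0
TIMEOUT 300
MENU TITLE Network Boot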


That’s basically everything there is about the installation of Linux (Debian works accordingly) over PXE and WDS.

And finally this is what it should look like if everything worked great:


Source 1: http://en.wikipedia.org/wiki/Preboot_Execution_Environment
Source 2: http://syslinux.zytor.com/wiki/index.php/PXELINUX

One step closer to digital nirvana…

Thanks to a podcast I found a great piece of software for my iPhone and iPod touch. It is a small tool which costs less than 3 Euros and is served by a server tool which runs on Windows and Mac OS X.

It’s called Air Video and it’s frikin’ awesome! ™

What you do is install the server software and point it to all your directories / drives that might contain video material. You then take your iPhone and install the client app. If you configured the server to be available over the internet, you can now connect from anywhere you want using a pass-pin (which is generated) and a password (which is set by you). And by “from anywhere” they mean “anywhere” – WLAN or 3G didn't make any difference in my test. You start the client, point to a video file, and most of the time you are asked whether you (a) want to play it directly (if the file is iPod-compatible), (b) want to live-convert and play it (when the file isn't compatible and needs to be re-encoded live for you), or (c) want to add the file to a conversion queue which will convert the video offline for you.

In terms of “finding your video” it does look like this:

Air Video

Simple, eh? Tapping a video will bring up this screen:


As I said – Play Directly, Play with Live Conversion, and the Offline Conversion Queue…

It worked with EVERY video I tried. When I tried Full HD movies, my serving PC wasn't able to handle the load, but everything in SD worked great, which is perfect for me.


Therefore I can highly recommend this tool – it really does work better than anything I’ve seen before.

Source: http://www.inmethod.com/air-video/index.html

How to unleash the “Virtual WiFi” feature in Windows 7 in C#

Great stuff ahead – this is just the thing I would have wanted to write if it had not been written already. This tool is free and open source, and it is the perfect workaround for those all-too-common cases when you want to download a podcast on your holiday and your Apple-branded device tells you “You can only download files up to 10 megabytes over 3G connections”. You take your notebook, log into 3G, create a WiFi hotspot with this tool, and off you go.

“Over the last week some of you may have heard about Connectify. It’s an app that unleashes the “Virtual WiFi” and Wireless Hosted Network features of Windows 7 to turn a PC into a Wireless Access Point or Hot Spot. Well, I looked into what it would take to build such an app, and it really wasn’t that difficult since Windows 7 has all the API’s built in to do it. After some time of looking things up and referencing the “Wireless Hosted Network” C++ sample within the WIndows 7 SDK, I now have a nice working version of the application to release. I’m calling this project “Virtual Router” since it essentially allows you to host a software based wireless router from your laptop or other PC with a Wifi card. Oh, and did I mention that this is FREE and OPEN SOURCE!”


“The Wireless Network create/shared with Virtual Router uses WPA2 Encryption, and there is not way to turn off that encryption. This is actually a feature of the Wireless Hosted Network API’s built into Windows 7 and 2008 R2 to ensure the best security possible.
You can give your "virtual" wireless network any name you want, and also set the password to anything. Just make sure the password is at least 8 characters.”
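Under the hood this is the same Wireless Hosted Network feature that Windows 7 also exposes through netsh; for comparison, the manual way from an elevated command prompt looks like this (SSID and key are examples):

rem configure the hosted network
netsh wlan set hostednetwork mode=allow ssid=MyHotspot key=MyPassword123
rem start broadcasting
netsh wlan start hostednetwork
rem stop it again
netsh wlan stop hostednetwork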

Source: http://virtualrouter.codeplex.com/