“Swagger is a specification and complete framework implementation for describing, producing, consuming, and visualizing RESTful web services. The overarching goal of Swagger is to enable client and documentation systems to update at the same pace as the server. The documentation of methods, parameters, and models are tightly integrated into the server code, allowing APIs to always stay in sync. With Swagger, deploying, managing, and using powerful APIs has never been easier.”
Wikipedia says this about JSON:
Unfortunately, complex JSON documents can get a bit heavy on the structure itself, repeating the same data schemas and IDs over and over.
RJSON comes to the rescue here. It’s backwards compatible and makes your JSON more compressible:
“RJSON converts any JSON data collection into more compact recursive form. Compressed data is still JSON and can be parsed with
JSON.parse. RJSON can compress not only homogeneous collections, but also any data sets with free structure.
RJSON is single-pass stream compressor, it extracts data schemes from document, assign each schema unique number and use this number instead of repeating same property names again and again.”
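To get a feel for the idea, here’s a rough illustration (the schema numbering and exact encoding are from memory and merely illustrative, not verified RJSON output). The first object of a kind stays a plain object and defines a schema; every following object with the same keys collapses into an array that leads with the schema’s number:

```json
{
  "users": [
    {"first": "Homer", "last": "Simpson"},
    [2, "Hank", "Hill", "Peter", "Griffin"]
  ]
}
```

The property names “first” and “last” appear only once, yet the result is still plain JSON that JSON.parse accepts.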
Of course this is all open-source and you can get your hands dirty here.
While I have been using Xcode a lot lately I quickly got used to one or two keyboard shortcuts that come in handy every once in a while. This cheat sheet aims at bringing you a lot of shortcuts that are pretty hard to remember if you’re not using them all the time (at least for me).
Since I’ve become sort of an iOS developer lately, I’ve had my fair share of WWDC recordings to get started with this whole Cocoa Touch and Objective-C development stuff.
Now a tool that is pretty handy is this website, which offers a full-text transcript search of all WWDC recordings. Awesome!
It’s becoming a fashion lately to release the source code of older but legendary commercial products to the public. Now Adobe has decided to gift the source code of the first version of their flagship product Photoshop, from 1990, to the Computer History Museum.
“That first version of Photoshop was written primarily in Pascal for the Apple Macintosh, with some machine language for the underlying Motorola 68000 microprocessor where execution efficiency was important. It wasn’t the effort of a huge team. Thomas said, “For version 1, I was the only engineer, and for version 2, we had two engineers.” While Thomas worked on the base application program, John wrote many of the image-processing plug-ins.”
Since my wife started working as a photographer on a daily basis, the daily routine of getting all the pictures off the camera after a long day filled with photo shoots got her bored quickly.
Since we had some Raspberry Pis to spare I gave it a try and created a small script which, when the Pi gets powered on, automatically copies all contents of the attached SD card to the house’s storage server. Easy as Pi(e) – so to speak.
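The script itself isn’t shown here, but the idea fits in a few lines – a minimal C# sketch that runs under Mono on the Pi (both mount points are assumptions, and the real script also produces the report mentioned below):

```csharp
using System.IO;

class SdCardMirror
{
    static void Main()
    {
        const string source = "/media/sdcard";              // assumed card mount point
        const string target = "/mnt/storage/photo-import";  // assumed server mount point

        foreach (string file in Directory.GetFiles(source, "*", SearchOption.AllDirectories))
        {
            // mirror the folder structure of the card below the target directory
            string relative = file.Substring(source.Length).TrimStart('/');
            string destination = Path.Combine(target, relative);
            Directory.CreateDirectory(Path.GetDirectoryName(destination));

            // only copy what is not already on the storage server
            if (!File.Exists(destination))
                File.Copy(file, destination);
        }
    }
}
```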
So this has now been an automated process for a couple of weeks – she comes home, gets all batteries to their chargers, drops the SD cards into the reader and powers on the Pi. After it has copied everything successfully the Pi sends an email with a summary report of what has been done. So far so good – everything is on our backed-up storage server then.
Now the problem was that she often does not immediately start working on the pictures. But she wants to take a closer look without having to sit in front of a big monitor – like taking a look on her iPad in the kitchen while drinking coffee.
So what we needed was a tool that does this:
- take a folder (the automated import folder), find all images in there and order them by day
- display an overview per day of all pictures taken
- allow viewing the full-sized picture if necessary
- work on any mobile or stationary device in the household – preferably an HTML5 responsive-design gallery
- it should be fast, because commonly over 200 pictures are taken per day
- it should be open source, because – well, open source is great – and we would probably need to tweak things a bit
Since I did not find anything close to what we had in mind, I sat down this afternoon and wrote a tool myself. It’s open-sourced and available for you to play with. Here’s a short description of what it does:
It’s pretty fast because it’s not actively resizing the images – instead it takes the thumbnail picture which the camera embedded into the original JPG file when storing the picture. It’s got some caching and can be run on any operating system where Mono / .NET is available – which is probably anything – even the Raspberry Pi.
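For the curious, extracting that embedded EXIF thumbnail is straightforward in C#/Mono – a minimal sketch (property tag 0x501B is the EXIF thumbnail data; whether the tool does it exactly this way is not shown here):

```csharp
using System.Drawing;
using System.IO;

class ThumbnailExtractor
{
    // Returns the JPEG bytes of the thumbnail the camera embedded at capture time.
    // Throws if the file carries no embedded thumbnail (0x501B = PropertyTagThumbnailData).
    static byte[] GetEmbeddedThumbnail(string jpegPath)
    {
        using (Image image = Image.FromFile(jpegPath))
        {
            return image.GetPropertyItem(0x501B).Value;
        }
    }

    static void Main(string[] args)
    {
        File.WriteAllBytes("thumb.jpg", GetEmbeddedThumbnail(args[0]));
    }
}
```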
The second edition of the book “Security Engineering” by Ross Anderson is available as a full download. It’s quite a reference and a must-read for anybody with an interest in security (which for example all developers should have).
“When I wrote the first edition, we put the chapters online free after four years and found that this boosted sales of the paper edition. People would find a useful chapter online and then buy the book to have it as a reference. Wiley and I agreed to do the same with the second edition, and now, four years after publication, I am putting all the chapters online for free. Enjoy them – and I hope you’ll buy the paper version to have as a convenient shelf reference.”
Source 1: http://www.cl.cam.ac.uk/~rja14/book.html
Source 1: http://pinterest.com/0x0/webdev/
“A programmer is likely to get just one uninterrupted 2-hour session in a day” is one of the statements this great blog article makes on the matter of interrupting professionals while they do their hard work.
It’s important to understand how that idea-to-code conversion happens. For anyone without that experience: think of it like being very, very concentrated while juggling things. When you get distracted it’s very likely that you drop something. In the worst case you never even get anything up in the air to juggle…
If you can stand a little bit of cursing and bad words, and if you’re a developer, you should give this site a visit. The commit logs from last night speak for themselves:
It’s been a habit of id Software to release the source code of their previous games and game engines as open source when the time is due. That’s what happened with Doom 3 as well. Since beautiful code appeals to a lot of developers, it’s just a logical step to analyse the Doom 3 source code with the beauty aspects in mind.
Now there are two very good examples of such analysis.
Source 1: http://kotaku.com/5975610/the-exceptional-beauty-of-doom-3s-source-code
Source 2: ftp://ftp.idsoftware.com/idstuff/doom3/source/CodeStyleConventions.doc
Source 3: http://fabiensanglard.net/doom3/index.php
Source 4: https://github.com/TTimo/doom3.gpl
And once again some smart people put their heads together and came up with something that will revolutionize your world. Well, it’s ‘just’ home automation, but it does look very, very promising – especially the human-machine interface through speech recognition. First of all, let’s start with a short introductory video:
“CastleOS is an integrated software suite for controlling the automation equipment in your home – an operating system for your castle, if you will. The first piece of the suite is what we call the “Core Service” – it acts as the central controller for the whole system. This runs on any relatively recent Windows computer (or more specifically, the computer that has an Insteon PLM or USB stick plugged in to it), and creates a network connection to both your home automation devices, and the second piece of the integrated suite – the remote access apps like the HTML5 app, Kinect voice control app, and future Android/iOS apps.” (from the CastleOS page)
So it’s said to be an all-in-one system that controls power outlets and devices through its core service and offers the option to add Kinect-based speech recognition so you can say things like “Computer, Lights!”.
Unfortunately it comes with quite high and hard requirements when it comes to the hardware it’s compatible with. A Kinect possibly exists in your household already, but I doubt that you’ve got the Insteon hardware to control your devices with.
That seems to be the main problem of all current home automation solutions – you just have to have the matching hardware to use them. It’s not really possible to use anything and everything in a standardized way. Maybe it’s time to have a “home plug’n’play” specification set up for all hard- and software vendors to follow?
Source 1: http://www.castleos.com
So first a small video to get an idea what I am implementing right now:
- to draw SVG in a human-controllable way
- to draw those nice gauges
- an animated HTML5 canvas odometer
I plan to add a lot more – like support for swiping gestures. So this will be – just like h.a.c.s – a continuous project. Since I switched to OS X entirely at home I use the great Coda2 to write and debug the code. It helps a lot to have a two-browser set-up, because for some reason I still don’t feel quite at home with the WebKit Web Inspector.
Another great feature of Coda2 is the AirPreview – which means it will preview your current page in the editor on an iOS device running DietCoda – oh how I love those automations.
Back in 2006 I wrote about a new technology which the equally new company Geomerics was demoing.
Back in 2006 everything was just a demo. Now it seems that Geomerics has found some very well known customers, and – without many of us noticing – a lot of the beauty in current-generation game graphics comes from the capabilities that real-time radiosity lighting adds.
“Geomerics delivers cutting-edge graphics technology to customers in the games and entertainment industries. Geomerics’ Enlighten technology is behind the lighting in best-selling titles including Battlefield 3, Need for Speed: The Run, Eve Online and Quantum Conundrum. Enlighten has been licensed by many of the top developers in the industry, including EA DICE, EA Bioware, THQ, Take 2 and Square Enix.” (Source)
There even is a more updated version of the demo video:
In November 1998 a book about file system design was released, taking the Be File System as its central example.
“This is the new guide to the design and implementation of file systems in general, and the Be File System (BFS) in particular. This book covers all topics related to file systems, going into considerable depth where traditional operating systems books often stop. Advanced topics are covered in detail such as journaling, attributes, indexing and query processing. Built from scratch as a modern 64 bit, journaled file system, BFS is the primary file system for the Be Operating System (BeOS), which was designed for high performance multimedia applications.
You do not have to be a kernel architect or file system engineer to use Practical File System Design. Neither do you have to be a BeOS developer or user. Only basic knowledge of C is required. If you have ever wondered about how file systems work, how to implement one, or want to learn more about the Be File System, this book is all you will need.”
If you’re interested in the matter I definitely recommend reading it – it’s available for free in PDF format and will help you understand what those file system patterns are all about – even in terms of things we still haven’t gotten from our ‘modern filesystems’ today.
Source 1: http://www.nobius.org/~dbg/
This October I had the pleasure to fly to Tokyo for the second time in 2012.
The development unit of Rakuten Japan was hosting the 7th Rakuten Technology Conference in Rakuten Tower 1 in Tokyo.
The schedule was packed, with up to 6 tracks in parallel and a lot of interesting topics ranging from research to grass-roots development.
Knowing how to deal with those personal computers is getting more important by the day. Not everybody needs to know how to write code – but since writing code and making those machines do what you want them to do isn’t as hard as it used to be, it’s worth the try!
On the mission to learn to code, this page is probably very interesting for anyone starting out:
Source 1: http://www.codecademy.com/#!/exercises/0
It’s a common use case: you’ve got some JSON-formatted data and you want to interface with it using your favourite programming language, C#. You can write the appropriate classes yourself, or you could use the fabulous json2csharp helper page.
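To give an idea of what you get (the exact class and property naming of json2csharp may differ – this is a sketch): you paste in a JSON document and it produces plain C# classes you can feed straight into a deserializer like Json.NET.

```csharp
using System.Collections.Generic;
using Newtonsoft.Json;

// for input like: {"name":"Jane","age":42,"tags":["a","b"]}
// json2csharp generates classes roughly like this:
public class RootObject
{
    public string name { get; set; }
    public int age { get; set; }
    public List<string> tags { get; set; }
}

// which you can then use like this:
// RootObject data = JsonConvert.DeserializeObject<RootObject>(jsonString);
```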
Some more information directly from the readme file:
There are many things which are underestimated when team leads think about their team and possible actions to drive progress.
One of those things is that a team needs information to maintain and gain velocity. You cannot expect everyone to know out of the blue what is important and in which direction everything is moving. To let everyone know, and to develop that direction, it’s important to share information as much as possible. It’s important to give everyone access to the information necessary to do a better job.
That’s why we had a build monitor at sones: a tool that displayed the current status of our build servers to all developers. Every time someone committed a change, the build servers picked up that commit, built it and tested it with automated tests. The status of all that could be seen by all developers as things happened.
So within seconds everyone could see if their commit broke something. Even better: everyone could see it. Everyone cared that the build needed to be working and that tests needed to pass. It was everyone’s job to do the housekeeping. When we switched from Team Foundation Server to Git and Jenkins this status display needed to be replaced – you could immediately tell that things went from good to not-so-good in terms of build stability and automated testing.
Today I had the opportunity to take a tour of the Thomann logistics center. Standing in the support department I had this in front of me:
There were about 6 big status screens displaying the day’s incoming call statistics, sales figures and other numbers important to those who work there. It’s a very important and well-integrated way to keep information flowing.
Since I am with Rakuten I have been thinking about setting up a new status board for my team. Something that might be inspired by the awesome status board which Panic has built:
Since, in addition to what we had at sones, there are a lot more things to track and handle (code, deployment, operations, overall numbers), I think such a status board will be of invaluable worth for the team.
Oh dear. I just realized that I never really announced or talked about the fact that I changed my employer and moved to an (old) new place.
Yes, that’s right, I am not with sones anymore. Since January 1st I have been the CTO of Rakuten Germany. When I signed the contract the company was called Tradoria – one of the first big projects I had the opportunity to work on was the so-called brandchange.
A humongous Japan-based company called Rakuten bought Tradoria in the middle of 2011, and after half a year it was time to switch the brand.
As you can imagine, the weeks since January 1st have been busy. I had to digest a lot of existing technology and products. I met and got to know a lot of interesting people – first and foremost a great team of developers that went through almost all imaginable pains and parties to come up with a marketplace and shop system that is a perfect base for take-off.
A short word on the business model of Rakuten – if you’re a merchant, you gotta love it: think of Rakuten as a full-service provider for merchant and customer. You as a Rakuten merchant get all the frontend and backend bliss to present and manage your products and orders. Rakuten takes care of all the nasty bits and pieces like hosting, development, telephone orders, invoicing and payment. The only thing that you as a Rakuten merchant need to do is put in great products, gather orders and send out packages. Since Rakuten isn’t selling products on its own it won’t be competing with the merchants like other marketplace providers do these days.
On top of that, Rakuten cares for the merchant and the customer. Just a week after the successful brandchange I attended (and spoke at) Tradoria Live! 2012. That’s basically the merchant get-together. This year over 500 people attended this one-day conference. Think of it as a hands-on conference with features, plans and summaries of the last year and the upcoming one – every merchant is invited to come and talk in person to the people who work hard every day to make the marketplace and shop system better.
Just 24 hours later standing on that stage I found myself here:
Yep. That’s Tokyo (東京). After a very long flight we had the chance to take an all-embracing Tokyo tour before the meetings and talks started for our team. It was an awesome and exhausting week – just about 120 hours later I was back in Germany – I must have slept for two days :-)
Back in Germany I had a lot of stuff to learn and work through. We had already moved to a wonderful house near Bamberg – it was pretty much big luck to find it. It’s actually ridiculously huge for a couple and two cats, but we love it. Imagine the contrast: moving from an apartment next to a four-lane city street to the countryside, just a 15-minute drive away from work, with philosophical quietness all around.
Now, after about half a year, I am well into the process. I have met a lot of high-profile techies, and things seem to be picking up speed in regards to teamplay within Germany and with all the other countries. It’s a bliss to work for a group of companies that actually go through a lot of transitions while transforming from start-ups into an enterprise.
That’s all Rakuten – that’s all on one mission: Shopping is entertainment! Empower the merchants!
Besides all that I even started to learn Japanese. ただいま :-)
Yesterday @simcup wrote on Twitter that he is currently downloading the whole Jamendo catalog of Creative Commons music.
Although I already knew Jamendo, it had never occurred to me to download their whole catalog. Since I am a fan of choice I immediately thought about how I could download the catalog too. Since the only clue was a cryptic URI-like text hinting at how to achieve that, it suddenly sounded like a great idea to write a universal tool and release it as open source. This tool should allow users to download the whole catalog and keep their local Jamendo mirror in sync with the server, so that whenever new artists, albums or tracks are added the user does not need to download everything all over again.
So the only thing I had as a starting point was that cryptic URI pointing me to something I’d never heard of, called Rhythmbox. Turns out that this is a GNOME music player application which has Jamendo integration. After some clueless poking around I decided to take a look at the source of Rhythmbox, especially the Jamendo module.
This module is written in Python and is quite clean to read. Just by looking at the first lines I came across the interesting fact that there is an almost daily updated XML dump of the Jamendo catalog available from Jamendo. Hurray! Since Jamendo wants developers to interact with the platform, they put documentation online which allows anyone to write tools that stream and download tracks. After all the clues I found I finally ended up on this page.
So there they are: the catalog download, track stream and torrent URIs necessary to download the catalog. Now the only thing that is needed is a tool which parses the XML and creates a nice folder structure for us.
Parsing XML in C# (my preferred programming language) is easy. Basically you can use a tool called XSD.exe and let it generate first the XSD from the XML and then ready-to-use C# classes from that XSD.
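On the command line that looks something like this (the dump file name is an assumption based on the Jamendo catalog download; yours may differ):

```
xsd.exe dbdump_artistalbumtrack.xml
xsd.exe dbdump_artistalbumtrack.xsd /classes
```

The first call infers an XSD schema from the XML; the second turns that schema into C# classes ready for XmlSerializer.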
After doing all that, actually reading the whole catalog into a usable form breaks down to just three lines of code:
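The original three lines aren’t reproduced here, but with the XSD.exe-generated classes they likely boil down to something like this (the root type name JamendoData is an assumption):

```csharp
using System.IO;
using System.Xml.Serialization;

// deserialize the whole catalog dump into the generated classes
XmlSerializer serializer = new XmlSerializer(typeof(JamendoData));
using (FileStream stream = File.OpenRead("dbdump_artistalbumtrack.xml"))
{
    JamendoData catalog = (JamendoData)serializer.Deserialize(stream);
}
```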
Isn’t it great how modern frameworks take away the complexity of such tasks? At this point I’ve already parsed the whole catalog in my tool and have only written three lines of code. The rest was generated automatically for me. Best of all – this also works on non-Windows operating systems when you use Mono.
When the XML data is parsed and available in a nice data structure it’s easy to iterate over all artists, all albums and all tracks and then download the actual MP3 or OGG. And that’s basically what my tool does. It takes the XML, parses it, and downloads. Before downloading it checks if a track already exists and only fetches those added since the last run.
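A sketch of that loop, assuming the generated classes expose artists, albums and tracks as collections (all property names and the download URI pattern here are assumptions – check the Jamendo developer documentation for the real one):

```csharp
using System.IO;
using System.Net;

WebClient client = new WebClient();
foreach (var artist in catalog.Artists)
    foreach (var album in artist.Albums)
        foreach (var track in album.Tracks)
        {
            string path = Path.Combine(artist.Name, album.Name, track.Name + ".mp3");
            if (File.Exists(path))
                continue; // already mirrored during a previous run

            Directory.CreateDirectory(Path.GetDirectoryName(path));
            // hypothetical track download URI - see the Jamendo API docs
            client.DownloadFile("http://api.jamendo.com/get2/stream/track/redirect/?id=" + track.Id, path);
        }
```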
Additionally, since I am deeply involved in the development of the GraphDB graph database at sones, I want to make use of the Jamendo data and the graph structure within it. Since the directory structure my tool generates is only one way of looking at the data, it’s quite interesting to demonstrate the capabilities of GraphDB based on that data.
The idea behind the graph representation of the data is that you could start from almost any starting point imaginable. No matter if you start from a single track and drill up into genre and artists, or if you start at a location and drill down to tracks.
So what the Downloader does in terms of GraphDB integration is output a GraphQL script which can be imported into an instance of GraphDB.
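From memory, such a script might look roughly like the following – the exact sones GQL syntax and these type names are assumptions for illustration, not verified output of the tool:

```
CREATE TYPE Artist ATTRIBUTES (String Name)
CREATE TYPE Track  ATTRIBUTES (String Title, Artist ByArtist)

INSERT INTO Artist VALUES (Name = 'Some Artist')
INSERT INTO Track  VALUES (Title = 'Some Track', ByArtist = REF(Name = 'Some Artist'))
```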
The source code of my tool is available on GitHub and released under the BSD license – feel free to play with it and to contribute.
Source 1: http://www.jamendo.com
Source 2: https://github.com/bietiekay/JAMENDOwnloader
Configuring your favourite editor on OS X (or Linux, or anywhere else) is important – since nano is my editor of choice I wanted to use its syntax highlighting capabilities. Easy as pie, as it turned out:
I started with a .nanorc file from this guy and modified it to recognize some of my frequent file types (like .cs files).
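The additions look roughly like this (an illustrative sketch – the patterns in my actual file are more extensive):

```
## illustrative ~/.nanorc additions for C# files
syntax "csharp" "\.cs$"
color green "//.*$"
color cyan "\<(class|namespace|using|public|private|static|void|return)\>"
```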
You can download my nanorc.tar – just extract it and put it into your user home directory.
Source 1: http://talk.maemo.org/showthread.php?t=68421
Source 2: http://www.nano-editor.org/dist/v2.2/nano.html#Nanorc-Files
Source 3: nanorc.tar
It has been quite a while since I last wrote about my CB radio software called “FFN-Switcher”. Now I have finally found the time again to work on some bugfixes.
At the same time I uploaded the source code from my private Subversion repository to the publicly accessible GitHub service. There the source code and, even more importantly, the bug and wish list can be accessed and edited.
Of course there have also been some bugfixes in the meantime, so version 111 is now online and can be retrieved via the automatic update function.
After 5 years of TechEd abstinence it’s time to visit the conference again. This year’s TechEd will be held in Berlin, which is quite nice since traveling will be reduced to a minimum. Since the session schedule is already available I’ve already filled my calendar for TechEd week.
Okay, it’s impressive to see that so many interesting sessions can be held in one week – the bad thing is that I need to decide which to attend and which to watch on video later.
On another note: since I will be there, it would be a great opportunity to meet. Let me know if you are there and want to meet.
Hurray! Finally version 2.8 of Mono – the platform-independent open source .NET framework – is available as of today. Now I don’t have to recompile the trunk every now and then to get my bits running anymore.
The Major Highlights according to the release notes are:
- C# 4.0
- Defaults to the 4.0 profile.
- New Garbage Collection engine
- New Frameworks:
  - Parallel Framework
- Threadpool exception behavior has changed to match .NET 2.0
  - potentially a breaking change for a lot of Mono-only software
  - See information below in the "Runtime" section.
- New Microsoft open sourced frameworks bundled:
  - Managed Extensibility Framework
  - ASP.NET MVC 2
  - System.Data.Services.Client (OData client framework)
- Large performance improvements
- LLVM support has graduated to stable
  - Use the mono-llvm command to run your server loads with the LLVM backend
- Preview of the Generational Garbage Collector
- Version 2.0 of the embedding API
- WCF Routing
- .NET 4.0’s CodeContracts
- Removed the 1.1 profile and various deprecated libraries.
- OpenBSD support integrated
- ASP.NET 4.0
- Mono no longer depends on GLIB
Oh – they even linked my benchmark article.
There’s a great tool available to create impressive visualizations of source code repositories:
“Software projects are displayed by Gource as an animated tree with the root directory of the project at its centre. Directories appear as branches with files as leaves. Developers can be seen working on the tree at the times they contributed to the project.”
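Running it is as simple as pointing it at a checked-out repository – for example (the flag value is just a starting point):

```
gource --seconds-per-day 0.5 /path/to/your/repository
```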
For my own private code I had been using a Subversion repository for some time. Since I have been using Git at sones for a while now and the experience so far has been great, I decided to move all my public projects to GitHub. GitHub is a public Git hosting service which, in addition to source code management, offers a great user interface.
So I made the following repositories and sources available on GitHub:
Since we’re at it – we not only took the new Mono garbage collector through its paces regarding linear scaling but we also made some interesting measurements when it comes to query performance on the two .NET platform alternatives.
The same data was used as in the last article about the Mono GC. It’s basically a set of 200.000 nodes which each hold between 15 and 25 edges to instances of another type of node. One INSERT operation means that the starting node and all edges plus connected nodes are inserted at once.
We did not use any bulk loading optimizations – we just fed the sones GraphDB with the INSERT queries. We tested on two platforms – on Windows x64 we used the Microsoft .NET Framework and on Linux x64 we used a current Mono 2.7 build which soon will be replaced by the 2.8 release.
After the import was done we started the benchmarking runs. Every run was given a specified time to complete its job. The number of queries that were executed within this time window was logged. Each run utilized 10 simultaneously querying clients. Each client executed randomly generated queries of pre-specified complexity.
Not surprisingly, both platforms are almost head-to-head in average import times. While Mono starts way faster than .NET, the .NET platform is faster at the end with a larger dataset. We also measured the RAM consumption on each platform, and it turns out that while Mono takes 17 kbyte per complex insert operation on average, the Microsoft .NET Framework only seems to take 11 kbyte per complex insert operation.
Let the charts speak for themselves first:
As you can see on both platforms the sones GraphDB is able to work through more than 2.000 queries per second on average. For the longest running benchmark (1800 seconds) with all the data imported .NET allows us to answer 2.339 queries per second while Mono allows us to answer 1.980 queries per second.
With the new generational garbage collector Mono surely made a great leap forward. It’s impressive to see the progress the Mono team was able to make in the last months regarding performance and memory consumption. We’re already considering Mono an important part of our platform strategy – this new garbage collector and benchmark results are showing us that it’s the right thing to do!
UPDATE: There was a mishap in the “import objects per second” row of the above table.
“Mono is a software platform designed to allow developers to easily create cross platform applications. It is an open source implementation of Microsoft’s .Net Framework based on the ECMA standards for C# and the Common Language Runtime. We feel that by embracing a successful, standardized software platform, we can lower the barriers to producing great applications for Linux.” (Source)
In other words: Mono is the platform which is needed to run the sones GraphDB on any operating system other than Windows. It includes the so-called “Mono Runtime”, which basically is the place where the sones GraphDB “lives” and does its work.
Being a runtime is not an easy task. In fact, its abilities and algorithms have a deep impact on the performance of the application that runs on top of it. When it comes to all things related to memory management, the garbage collector is one of the most important parts of the runtime:
“In computer science, garbage collection (GC) is a form of automatic memory management. It is a special case of resource management, in which the limited resource being managed is memory. The garbage collector, or just collector, attempts to reclaim garbage, or memory occupied by objects that are no longer in use by the program. Garbage collection was invented by John McCarthy around 1959 to solve problems in Lisp.” (Source)
The Mono runtime has always used a simple garbage collector implementation called “Boehm-Demers-Weiser conservative garbage collector”. This implementation is mainly known for its simplicity. But as more and more data intensive applications, like the sones GraphDB, started to appear this type of garbage collector wasn’t quite up to the job.
So the Mono team started development on a Simple Generational Garbage Collector whose properties are:
- Two generations.
- Mostly precise scanning (stacks and registers are scanned conservatively).
- Copying minor collector.
- Two major collectors: Copying and Mark&Sweep.
- Per-thread fragments for fast per-thread allocation.
- Uses write barriers to minimize the work done on minor collections.
So what we did was take the old and the new garbage collector and our GraphDB and run them through an automated test which basically executes 200.000 insert queries, resulting in more than 3.4 million edges between more than 120.000 objects. The results were impressive when we compared the old Mono garbage collector to the new mono-sgen garbage collector.
When we plotted a basic graph of the measurements we got this:
The x-axis shows the number of inserts and the y-axis the time it takes to answer one query. So it’s a great measurement to see how big the impact of the garbage collector actually is on a complex application like the sones GraphDB.
The red curve is the old Boehm-Demers-Weiser conservative garbage collector built into current stable versions of Mono. The blue curve is the new SGEN garbage collector, which can be used by invoking Mono using the “mono-sgen” command instead of the “mono” command. Since mono-sgen is not included in any stable build yet, it’s necessary to build Mono from source. We documented how to do that here.
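In practice the switch is just a different launcher (the server binary name here is hypothetical):

```
# default Boehm collector
mono GraphDBServer.exe

# new generational collector
mono-sgen GraphDBServer.exe
```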
So what are we actually seeing in the chart? We can see that mono-sgen draws a fairly linear line in comparison to the old Mono garbage collector. It’s easy to tell why the blue curve is rising – the number of objects is growing with every millisecond. The blue line is just what we expect from a hard-working garbage collector. To our surprise, the old garbage collector seems to have problems coping with the number of objects over time. It spikes several times, and towards the end it even gets worse, spiking all over the place. That’s something we don’t want to see happening anywhere.
The conclusion: if you are running something that does more than print “Hello World” on Mono, you surely want to take a look at the new mono-sgen garbage collector. If you’re planning to run the sones GraphDB on Mono we highly recommend using mono-sgen.