How to design a Transit Map

We’ve all used them. And if they are made well they really make life easier: Transit Maps.

Apart from visualizing an actual train line, the transit map art style can be applied to a lot of other data visualization needs.

Take time to consider everything about your diagram. How thick do you want the route lines to be? Are they touching, or is there a slight gap between them? Are you going to use curves or straight edges where a line changes direction? Consider your station markers – will they be ticks, dots or something else? Think about how you would like to differentiate interchange stations or transit centres as well. Consider the typeface you’re going to use for station names – it should be legible and simple. When you’ve considered all these points, you’ve given yourself a set of rules that you will use to construct the diagram. Every design decision you make should be evaluated against these rules: sometimes, you can break them if needed, but it definitely helps to have them in your head as you work.

Tutorial: How to design a Transit Map

OpenSource drawing: Krita

I just recently learned about Krita, an open source drawing application that allows you to… oh well… do free-hand drawings.

Krita is a FREE and open source painting tool designed for concept artists, illustrators, matte and texture artists, and the VFX industry. Krita has been in development for over 10 years and has had an explosion in growth recently. It offers many common and innovative features to help the amateur and professional alike. See below for some of the highlighted features.

Krita highlights

A look at the gallery confirms that I cannot draw. The frustration about that is limited, because there are so many nice drawings to gaze at!

Also, this is a multi-platform application: it’s available for Windows, macOS and Linux.

css font-feature “tnum”

Oh this is so useful for my head-up-display prototype implementation:

This feature replaces numeral glyphs set on glyph-specific (proportional) widths with corresponding glyphs set on uniform (tabular) widths. Note that some fonts may contain tabular figures by default, in which case enabling this feature may not appear to affect the width of glyphs.

tabular figures: tnum
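Enabling it is a one-liner in CSS. A minimal sketch of how I would switch it on (the .hud-value class name is just a made-up example):

.hud-value {
  /* opt into uniform-width (tabular) figures */
  font-feature-settings: "tnum";
  /* the higher-level equivalent, where supported */
  font-variant-numeric: tabular-nums;
}

With that in place, a changing read-out no longer jitters horizontally, because every digit occupies the same width.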

IoP – the internet of pets – predictive maintenance of a cat

In the interesting field of IoT, a lot of buzz is made around predictive maintenance use cases. What is predictive maintenance?

The main promise of predictive maintenance is to allow convenient scheduling of corrective maintenance, and to prevent unexpected equipment failures.

The key is “the right information in the right time”. By knowing which equipment needs maintenance, maintenance work can be better planned (spare parts, people, etc.) and what would have been “unplanned stops” are transformed to shorter and fewer “planned stops”, thus increasing plant availability. Other potential advantages include increased equipment lifetime, increased plant safety, fewer accidents with negative impact on environment, and optimized spare parts handling.

Wikipedia

So in simpler terms: if you can predict that something will break, you can repair it before it breaks. This improves reliability and saves costs, even though you repaired something that did not strictly need repairs yet. At the very least you reduce inconvenience by repairing and maintaining while it is still easy to do, rather than under stress.

You would probably agree with me that these are very industry-specific use cases. They are easier to understand when tied to an actual case that happened.

Let me tell you about a case that happened here last week. It happened to Leela – a 10-year-old white British Shorthair lady cat with gorgeous blue eyes:

Ever since her sister developed a severe kidney issue, we have been unobtrusively monitoring their behavior and vital signs: simple things like weight, food intake, water intake, movement, regularities (how often x/y/z).

I’ve built hardware to allow us to do that in the most simple and automated way. To track their weight, we simply put the kitty litter box on a heavily modified bathroom scale. I wrote about that back in 2016.

When Leela now visits her litter box she is automatically weighed, and the visit itself is recorded.

A lot of data is aggregated from this, and that data is processed in many ways to generate indications of issues and to raise alerts.

Last weekend this alerted us that there could be an issue with Leela’s health, as she was suddenly visiting the litter box a lot more often throughout the day.

We did not notice anything ourselves. Leela behaved as she would every day, but the monitoring detected that something was not right.

What had happened?

The chart shows the hourly average and daily total visits to the litterbox.

On the morning of March 9th Leela had already been to the litter box more often than average – so much above average that it tripped the alerting system. You can see the faded red area at the top of the graph above showing the alert threshold. The red vertical line was drawn in by me, because this was when we got alerted.

Now what? She behaved totally normally, she just went to the litter box a lot more often. We were concerned, as it matched her sister’s behavior, so we went through all the checklists with her on what the issue could be.

We monitored her closely, increased the water supplied and changed her food so she could fight a potential bladder infection (or worse).

By Monday she still did not behave differently to a degree that anyone would have become suspicious. Nevertheless my wife took her to the vet. And of course, after all tests were run, a bladder infection was diagnosed.

She got antibiotics, and around Wednesday (13th March) she actually started to behave much like a sick cat would. By then she was already on day 3 of antibiotics, and after just one day of presumable pain she was back to fully normal.

Interestingly, all of this can be traced in the monitoring data, even that she must have felt worse on the 13th.

With everything back to normal now, it seems that this monitoring has really led us to a case of “predictive cat maintenance”. We hopefully prevented a lot of pain by acting quickly, which was only possible through the monitoring in place.

Monitoring pets is seemingly becoming a thing – which led to the rather funky “Internet of Pets” declaration in my post title. I know about a certain Volker Weber who even wrote in the current c’t magazine about monitoring his dog’s location.

Health is a huge topic for the future of devices and gadgets. Everyone will casually accumulate more and more devices in their daily lives. Unfortunately most of those won’t be under your own control if you do not insist on being in control.

You do not have to build stuff yourself like I did. You only need to make the right purchase decisions according to the things important to you. And one of the items on that checklist should be: “Am I in full control of the data flow and data storage?”

If you are not: do not buy!

By coincidence, the idea of having the owner of the data in full control of the data itself is central to my current job at MindSphere. With all the bells and whistles around the industrial IoT platform, it all comes down to keeping the actual owner of the data in control and in charge. A story for another post!

proper links when printing out the internet

Cascading Style Sheets, or CSS for short, are a very powerful tool to control how content is displayed.

CSS is designed to enable the separation of presentation and content, including layout, colors, and fonts. This separation can improve content accessibility, provide more flexibility and control in the specification of presentation characteristics, enable multiple web pages to share formatting by specifying the relevant CSS in a separate .css file, and reduce complexity and repetition in the structural content.
Separation of formatting and content also makes it feasible to present the same markup page in different styles for different rendering methods, such as on-screen, in print, by voice (via speech-based browser or screen reader), and on Braille-based tactile devices. CSS also has rules for alternate formatting if the content is accessed on a mobile device.

Wikipedia

So with CSS you can differentiate between target media: it gives you control over the output rendered for each specific render target.

I frequently come across content I want to read. And almost as frequently I do not have the time for a longer read right at that moment.

My workflow for this: I keep a to-be-read backlog of PDF files I have printed from websites. These PDF files are automatically synced to various devices and I can read them at a later stage.

What is often frustrating to see: bad print results of website layouts, because those websites have not even considered the remote option of being printed.

With this blog I want to support any workflow, first and foremost my own. Therefore printing this blog applies some print-audience specifics.

For example, the links I am using in articles are usually inline when you are reading in a browser. When you print an article, those links get converted and written out alongside the text, so you have them in your print-outs without losing information.

this is how Google Chrome shows you the print preview…

And the changes you need to apply to any webpage to instantly enable this are very simple as well! Just add this to your page stylesheet:

@media print {
  /* remove the in-text underline from links */
  a {
    text-decoration: none;
  }
  /* write the link target out after the link text */
  a::after {
    content: "( " attr(href) " )";
    margin-left: 0.2em;
    text-decoration: underline;
  }
}
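A small refinement you may want on top of this – a hedged sketch of the same technique, not part of the original snippet: in-page anchors (hrefs starting with “#”) add no information on paper, so you can suppress their targets again:

@media print {
  /* in-page anchor targets are useless in print-outs */
  a[href^="#"]::after {
    content: "";
  }
}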

a new header

I had redone the header of this blog a while ago, but since I was trying out some things on the template I wanted something more dynamic – without any additional dependencies.

So I searched and found:

Tim Holman did a very nice implementation of this “worm generator” using only the HTML5 canvas tag and some math. I made some very slight changes and integrated it into the header graphic. It will react to your mouse movement and reset if you click anywhere. Give it a go!

fonts for your programming needs

We are looking at our screens for more and more of the day, and most of that time we are reading or writing text. Text needs to look good for our eyes not to get sore – apart from the obvious “being able to tell which letter that is”, there is a big portion of personal taste and preference in the choice of a font.

Most of the texts I am writing benefit from monospaced fonts.

This blog celebrates monospaced fonts for programming.
So many fonts have popped up in recent years.

programmingfonts.org/about

Of course there’s a nice page available that previews the fonts right in your browser:
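Once you’ve picked a favourite there, wiring it up takes only a couple of lines of CSS. A minimal sketch – the font names are just examples, swap in your own pick:

code, pre {
  /* preferred programming font first, safe fallbacks after */
  font-family: "Fira Code", "Source Code Pro", Consolas, monospace;
}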

when you’re working late: grant your eyes a rest

“Ever notice how people texting at night have that eerie blue glow?

Or wake up ready to write down the Next Great Idea, and get blinded by your computer screen?

During the day, computer screens look good—they’re designed to look like the sun. But, at 9PM, 10PM, or 3AM, you probably shouldn’t be looking at the sun.

f.lux fixes this: it makes the color of your computer’s display adapt to the time of day, warm at night and like sunlight during the day.

It’s even possible that you’re staying up too late because of your computer. You could use f.lux because it makes you sleep better, or you could just use it just because it makes your computer look better.”


Source: https://justgetflux.com/

The Data Visualisation Catalogue

The Data Visualisation Catalogue is an on-going project developed by Severino Ribecca.

“Originally, this project was a way for me to develop my own knowledge of data visualisation and create a reference tool for me to use in the future for my own work. However, I thought it would also be a useful tool not only for other designers, but for anyone in a field that requires the regular use of data visualisation (economists, scientists, statisticians etc).

Although there have been a few attempts in the past to catalogue some of the established data visualisation methods, there is no website that is really comprehensive, detailed or helps you decide the right method for your needs.

I will be adding in new visualisation methods, bit-by-bit, as I research each method to find the best way to explain how it works and what it is best suited for.”


Source 1: http://datavizcatalogue.com/

Automated Picture Tank and Gallery for a photographer

Since my wife started working as a photographer on a daily basis, the routine of getting all the pictures off the camera after a long day filled with photo shoots bored her quickly.

Since we had some Raspberry Pis to spare, I gave it a try and created a small script which, when the Pi gets powered on, automatically copies all contents of the attached SD card to the house’s storage server. Easy as Pi(e) – so to speak.


This has been an automated process for a couple of weeks now – she comes home, gets all batteries to their chargers, drops the SD cards into the reader and powers on the Pi. After it has copied everything successfully, the Pi sends an e-mail with a summary report of what has been done. So far so good – everything then sits on our backed-up storage server.

Now the problem was that she often does not immediately start working on the pictures, but she still wants to take a closer look without needing to sit in front of a big monitor – for example on her iPad in the kitchen while drinking coffee.

So what we needed was a tool that does this:

  • take a folder (the automated import folder), read all images in there and order them by day
  • display an overview of all pictures taken per day
  • allow viewing the full-sized picture if necessary
  • work on any mobile or stationary device in the household – preferably as an HTML5 responsive-design gallery
  • it should be fast, because commonly over 200 pictures are taken per day
  • it should be open source because – well, open source is great – and we would probably need to tweak things a bit

Since I did not find anything near what we had in mind, I sat down this afternoon and wrote a tool myself. It’s open-sourced and available for you to play with. Here’s a short description of what it does:

It’s called GalleryServer and is basically an embedded HTTP server which takes all .jpg files from a (configurable) folder and offers you some handy tool URLs which respond with JSON data for you to work with. I’ve written a very small HTML user interface with a bit of JavaScript (using the great html5 kickstart) that allows you to see all available days and get a nice thumbnail overview of each day – clicking a thumbnail opens the full-size image in a new window.

It’s pretty fast because it does not actively resize the images – instead it takes the thumbnail which the camera already embedded in the original JPG file when storing the picture. It has some caching and can run on any operating system where Mono / .NET is available – which is probably almost anything, even the Raspberry Pi.

Source 1: my wife’s page
Source 2: 99lime html5 kickstart boilerplate
Source 3: https://github.com/bietiekay/GalleryServer

a good source of all things javascript libraries

Choosing the right JavaScript library is one of the key elements for creating a good prototype – even a production application – in very little time. If you want new impressions, hints and links to the JavaScript libraries that will render your next project a success, look no further:


Source 1: http://pinterest.com/0x0/webdev/

an ode to the beauty of code by the example of the source code of Doom 3

It has been a habit of id Software to release the source code of their previous games and game engines as open source when the time is due. That’s what happened with Doom 3 as well. Since beautiful code appeals to a lot of developers, it’s just a logical step to analyse the Doom 3 source code with those beauty aspects in mind.

Now there are two very good examples of such analysis.

Source 1: http://kotaku.com/5975610/the-exceptional-beauty-of-doom-3s-source-code
Source 2: ftp://ftp.idsoftware.com/idstuff/doom3/source/CodeStyleConventions.doc
Source 3: http://fabiensanglard.net/doom3/index.php
Source 4: https://github.com/TTimo/doom3.gpl

Raspberry Pi gets a camera

The first signs of the upcoming camera board for the Raspberry Pi are showing. During the Electronica 2012 fair, RS showed the board to the public for the first time.

Considering it’s going to be a 25 Euro add-on for the Pi, the specification is quite impressive. The OmniVision OV5647 is used as the image sensor – its bigger brother is used in the iPhone 4. OmniVision says:

“The OV5647 is OmniVision’s first 5-megapixel CMOS image sensor built on proprietary 1.4-micron OmniBSI™ backside illumination pixel architecture. OmniBSI enables the OV5647 to deliver 5-megapixel photography and high frame rate 720p/60 high-definition (HD) video capture in an industry standard camera module size of 8.5 x 8.5 x ≤5 mm, making it an ideal solution for the main stream mobile phone market.

The superior pixel performance of the OV5647 enables 720p and 1080p HD video at 30 fps with complete user control over formatting and output data transfer. Additionally, the 720p/60 HD video is captured in full field of view (FOV) with 2 x 2 binning to double the sensitivity and improve SNR. The post binning re-sampling filter helps minimize spatial and aliasing artifacts to provide superior image quality.

OmniBSI technology offers significant performance benefits over front-side illumination technology, such as increased sensitivity per unit area, improved quantum efficiency, reduced crosstalk and photo response non-uniformity, which all contribute to significant improvements in image quality and color reproduction. Additionally, OmniVision CMOS image sensors use proprietary sensor technology to improve image quality by reducing or eliminating common lighting/electrical sources of image contamination, such as fixed pattern noise and smearing to produce a clean, fully stable color image.

The low power OV5647 supports a digital video parallel port or high-speed two-lane MIPI interface, and provides full frame, windowed or binned 10-bit images in RAW RGB format. It offers all required automatic image control functions, including automatic exposure control, automatic white balance, automatic band filter, automatic 50/60 Hz luminance detection, and automatic black level calibration.”

That sensor delivers raw RGB imagery to the Raspberry Pi through the onboard camera connector interface:

this actually is a 14 MPixel test-board and not the final 5 MPixel one…

And the part that impressed me the most: the 5 megapixel sensor delivers its raw data stream and it gets H.264-compressed directly within the GPU of the Raspberry Pi. 30 frames per second at 1080p without noticeable CPU load – how does that sound? Not bad for a 50 Euro setup!

Source 1: First Demo
Source 2: OmniVision OV5647 Color CMOS QSXGA Image Sensor

What happened to: realtime Radiosity lighting

Back in 2006 I wrote about a new technology which the equally new company Geomerics was demoing.

Back in 2006 everything was just a demo. Now it seems that Geomerics has found some very well-known customers, and – without many of us noticing – a lot of the beauty in current-generation game graphics comes from the capabilities real-time radiosity lighting adds.

“Geomerics delivers cutting-edge graphics technology to customers in the games and entertainment industries. Geomerics’ Enlighten technology is behind the lighting in best-selling titles including Battlefield 3, Need for Speed: The Run, Eve Online and Quantum Conundrum. Enlighten has been licensed by many of the top developers in the industry, including EA DICE, EA Bioware, THQ, Take 2 and Square Enix.” (Source)

There even is a more updated version of the demo video:

Source 1: real time radiosity lighting article from 2006
Source 2: Geomerics Presentations
Source 3: More Geomerics Media

Realtime Video Effects: Time Remap

With today’s processing power and the faults of current-generation digital video cameras you can have a lot of fun – if you know how:

The above demonstrated effect is called Time Remapping. The description of the video tells us more about the effect itself:

The effect was discovered accidentally by a photographer called Jacques Henri Lartigue at the beginning of the 20th century (in 1912 to be precise). He took a picture of a race car with elliptically deformed tires – an effect caused by the characteristics of the camera he was using.

Source 1: http://vimeo.com/7878518
Source 2: http://en.wikipedia.org/wiki/Jacques_Henri_Lartigue
Source 3: http://bokeh.fr/blog/photographes/la-voiture-deformee-de-jacques-henri-lartigue/

gorgeous minecraft renderings – using opensource and blender

There you are – you’ve spent hundreds of hours, maybe together with friends, in a game called Minecraft. You mined and you crafted. And you built yourself your own world. Out of blocks.

“Minecraft is a game about breaking and placing blocks. At first, people built structures to protect against nocturnal monsters, but as the game grew players worked together to create wonderful, imaginative things.

It can also be about adventuring with friends or watching the sun rise over a blocky ocean. It’s pretty. Brave players battle terrible things in The Nether, which is more scary than pretty. You can also visit a land of mushrooms if it sounds more like your cup of tea.”

Those who haven’t played Minecraft yet – you’re missing out on a lot. It’s fun and addictive. It seems pretty dull when you don’t know it, but as soon as you get immersed you immediately see that it’s a lot bigger and the possibilities far more varied than they appear at first sight.

With all those blocks you can basically build your own world and humongously huge objects. It obviously takes a while in most cases, because (until you start using tools and mods) you need to fit each block to the next in order to create those big objects.

So imagine you’ve got your own world and you want to create nice renderings of it to hang on your real-world apartment walls. You can use a very simple and thankfully free (open source) tool to do that.

It’s called mcobj and it uses Blender to render the exported geometry. Get it and send in your renderings!

Source 1: https://github.com/quag/mcobj
Source 2: http://quag.imgur.com/minecraft__blender
Source 3: https://minecraft.net/
Source 4: http://www.pcgamer.com/2012/11/29/minecraft-renders-azeroth-and-the-pc-gamer-server/

Sintel

After the last Open Movie Project, “Big Buck Bunny”, Sintel is the next short movie available for free download. Get it here.

“Sintel” is an independently produced short film, initiated by the Blender Foundation as a means to further improve and validate the free/open source 3D creation suite Blender. With initial funding provided by 1000s of donations via the internet community, it has again proven to be a viable development model for both open 3D technology as for independent animation film.
This 15 minute film has been realized in the studio of the Amsterdam Blender Institute, by an international team of artists and developers. In addition to that, several crucial technical and creative targets have been realized online, by developers and artists and teams all over the world.

“Sintel” commenced in May 2009, with producer Ton Roosendaal establishing a core team consisting of Colin Levy (director), David Revoy (concept art), Martin Lodewijk (story) and Jan Morgenstern (composer). In August script writer Esther Wouda was approached as a consultant, which resulted in her taking the responsibility for the entire screenplay. Esther then worked in close cooperation with Colin, David and Ton to deliver the final script early November. Meanwhile, Colin and David realized the first storyboards.

Based on a public call for artists – with over 150 respondents – the Durian artist team got established in July 2009. They first met in a pre-production week in Amsterdam in August, and all decided to join the project per October 1st. With the final movie budget still unknown, the target then still was to finish the film within 7 months, with a team of 6 artists and 2 developers. At that time the team still had the hopes to be able to realize the script in a 6-8 minute film.

In november, the Netherlands Film Fund approved on a substantial subsidy for Sintel, enough to extend the project to 10 months, with possible 1 or 2 extra artist seats in the final months. It was also by this time that breakdowns and animatic edits showed that the script had to be revised to become more compact, with a story structure using a flashback.

In the months after, Colin’s work on the Director’s Layout – 3D animatic shots – and final designs on the grand finale gradually made the movie longer, from 9 minutes in november, to almost 12 in May. Proper story telling, to absorb an audience with convincing characters and action just takes time!

With the highly anticipated extra funding from the Amsterdam Cinegrid – also funding a 4k resolution version – Ton finally could extend the team with 5 artists and a developer in March 2010. With 14 people the film then was completed for a first screening on July 18th in cinema Studio K in Amsterdam.
Three artists then stayed in Amsterdam working on final shot edits, lighting design, compositing, and on the impressive 2 minute film credits. The movie ended up with a total duration of 14m:48s, 888 seconds!

Watch it now:

Sintel

Source 1: Elephants Dream
Source 2: Big Buck Bunny
Source 3: Sintel
Source 4: Sintel Download

visualize your source control

There’s a great tool available to create impressive visualizations of source code repositories:

“Software projects are displayed by Gource as an animated tree with the root directory of the project at its centre. Directories appear as branches with files as leaves. Developers can be seen working on the tree at the times they contributed to the project.

Currently there is first party support for Git, Mercurial and Bazaar, and third party (using additional steps) for CVS and SVN. “

 

Source: http://code.google.com/p/gource/

Developing on a Microsoft Surface Table

At sones I am involved in a project that works with a piece of hardware I have wanted to work with for about 3 years now: the Microsoft Surface table.

I was able to play with some tables every now and then but I never had a “business case” which contained a Surface. Now that case just came to us: sones is at the CeBIT fair this year – we were invited by Microsoft Germany to join them and present our cool technology along with theirs.

Since we already had a graph visualisation tool, the idea was to bring that tool to the Surface and use the platform-specific touch controls and gestures.

the VisualGraph application that gave the initial idea

The good news was that it’s easier than expected to develop an application for Surface, and all parties are highly committed to the project. The bad news is that we were short on time right from the start: less than 10 days from concept to live presentation isn’t the definition of a “comfortable time schedule”. And since we’re currently in the middle of development, it’s a continuing race.

Thankfully Microsoft is committed to a degree that they even made it possible to have two great Surface and WPF ninjas on board who enabled us to get up to speed with the project (thanks to Frank Fischer, Andrea Kohlbauer-Hug, Rainer Nasch and Denis Bauer, you guys rock!).

a Surface simulator

I was able to convince UID to jump in and contribute their design and user interface knowledge to our little project (thanks to Franz Koller and Cristian Acevedo).

During the process of development I made some pictures which will be used here and there promoting the demonstration. To give you an idea of the progress we made here’s a before and after picture:

We started with a simple port of VisualGraph to the Surface table…

…and had something better working and looking at the end of that day.

I think everyone did a great job so far and will continue to do so – a lot of work to be done till CeBIT! :-)

Source 1: http://www.sones.com
Source 2: http://www.microsoft.de
Source 3: http://www.uid.com/

draw Sequence Diagrams by writing them on a website

Since we are developers, we need tools to note down and draw what we think would solve the problems of this planet.

One way to draw a sequence of actions is a sequence diagram. There are a number of tools to draw them, but now I came across a web service that allows me to write my sequence diagram in an easy textual representation and then draws the diagram for me. A line like “Alice->Bob: Authentication Request”, for example, becomes an arrow labelled “Authentication Request” from actor Alice to actor Bob. Great stuff!


Source 1: http://en.wikipedia.org/wiki/Sequence_diagram
Source 2: http://websequencediagrams.com/

So what exactly is Microsoft Research doing?

I am proud to announce that there’s a video publicly available which shows parts and projects Microsoft Research is currently working on. It’s great to see these projects, concepts and ideas become publicly available one by one:

“Craig Mundie, chief research and strategy officer of Microsoft, presents “Rethinking Computing,” a look at how software and information technology can help solve the most pressing global challenges we face today. Part of UW’s Computer Science and Engineering’s Distinguished Lecture Series, Mundie demonstrates a number of current and future-looking technologies that show how computer science is changing scientific exploration and discovery in exciting ways. He discusses the role of new science in solving the global energy crisis, and answers questions from the audience.”


Source: http://www.uwtv.org/programs/displayevent.aspx?rID=30363&fID=6021

If you ever needed Box-Shots of your product for a presentation…

If you – like us – need a picture of a shiny product box of a soon-to-be-released product for your presentation, you may want to consider buying one of several tools to create such shots. But you can also just use a small tool and Windows Presentation Foundation.

There’s a great article on CodeProject where almost everything is pre-set-up for our needs. And everything is written in C# – great stuff!

In action it looks like this:


Source: http://www.codeproject.com/KB/WPF/BoxShot.aspx?display=Print

Useful tools: OnTopReplica

Normally I am using a notebook and a 24” widescreen TFT as a dual-monitor solution. In fact I mostly use the 24” TFT for work and the notebook’s 14” TFT for all the things that don’t need to be in focus right now, like instant messengers.

Now in those few cases when a video needs to be played I want it on the main monitor, but I want it to take up as little space as possible. And I want it On Top of everything else… maybe sometimes I even want to control its opacity a bit…

Now there’s this cool tool called “OnTopReplica” – It’s available for free on Codeplex and works out of the box without installation.

After you start it you’ll end up with a small glass window where you can right-click to get a menu. You choose a window to be replicated – for example the YouTube browser window. After that you can even control which region of this window should be displayed. You can resize, move and of course control the opacity of this window.


It’s also great for presentations, because it allows you to simply resize any window you like. While it does that, the replica always stays “live” – everything you’re doing in the original window is displayed in the replica as well.


Source: http://ontopreplica.codeplex.com/

The Samsung UE46B6000 + Apple Mac Mini + Plex

So it’s been some days with the new media center setup. And all I can say is: oh boy, that is some seriously cool setup. I wouldn’t want to change anything besides adding a new sound system (>5.1 FTW!).

The display itself is thinner than expected:


I strongly recommend the Mac + Plex + Full HD display setup. Even if you don’t get any HD content from your cable provider you can live-stream or download HD content through the different provider plugins inside Plex. The plugin infrastructure with the built-in “App Store” is just great.

Since Plex is an XBMC-based media center software, you get tons of information scrapers for series and movies. So your eventually huge collection gets indexed and presented in a way you would not get from any other media center: you get pictures, movie posters, descriptions and much more, just by automatically indexing your collection.


MST3k FTW!

Needless to say, HD content is something different. In recent years I had only watched HD content on normal computer displays – having it huge and sharp now is different. Better.

BTW: it’s on the floor right now because my wife couldn’t decide yet which TV stand would suffice…

Source 1: http://av.samsung.de/produkte/detail2_main.aspx?guid=b6c1306c-f57d-4ce7-a944-56cc7346ed2e
Source 2: http://www.apple.com/macmini/
Source 3: http://plexapp.com/