Thanks for linking to me. And to you Kotaku readers: grab the RSS feed and keep reading this blog!
“Microsoft Portrait is a research prototype for mobile video communication. It supports .NET Messenger Service, Session Initiation Protocol and Internet Locator Service on PCs, Pocket PCs, Handheld PCs and Smartphone. It runs on local area networks, dialup networks and even wireless networks with bandwidths as low as 9.6 kilobits/second. Microsoft Portrait delivers portrait-like video if users are in low bandwidths and displays full-color video if users are in broadband. In low bandwidths, portrait video possesses clearer shape, smoother motion, shorter latency and much cheaper computational cost than do conventional video technologies. Microsoft Portrait pursues providing presence notification, chat/voice/video functions anytime, anywhere, on any device.”
I wrote quite a lot of code for the 23rd Chaos Communication Congress, and because of that I want to make it publicly available for everyone to download and use. It’s all GPL (because of the libraries used), so use it according to the license.
You can learn how to:
deserialize the pentabarf schedule.en.xml file
create a valid congress filename
create and manipulate animations with text and bitmaps
store those animations in AVI container files
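As a taste of the first item, here is a minimal sketch in Python (the released code itself is .NET). The day/room/event element names and attributes are assumptions based on the usual pentabarf schedule layout, so check them against the real schedule.en.xml:

```python
# Sketch: deserialize a pentabarf-style schedule XML into a list of talks.
# The element names (day/room/event/title) are assumed; adjust to the
# actual schedule.en.xml you are working with.
import xml.etree.ElementTree as ET

def parse_schedule(xml_text):
    """Return a list of (day_date, room_name, event_id, title) tuples."""
    root = ET.fromstring(xml_text)
    talks = []
    for day in root.iter("day"):
        for room in day.iter("room"):
            for event in room.iter("event"):
                title = event.findtext("title", default="")
                talks.append((day.get("date"), room.get("name"),
                              event.get("id"), title))
    return talks

# A tiny hand-made sample in the assumed layout:
sample = """<schedule>
  <day date="2006-12-27">
    <room name="Saal 1">
      <event id="1439"><title>Opening Ceremony</title></event>
    </room>
  </day>
</schedule>"""

print(parse_schedule(sample))
# → [('2006-12-27', 'Saal 1', '1439', 'Opening Ceremony')]
```

From tuples like these, building a congress filename is just string formatting over the event id and title.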
Because the small aquarium was free, we decided to buy a Siamese fighting fish. So let me introduce “Boris”:
So far he seems to be okay and healthy after the transport. To get updated information in the future, just take a look at blueturtles, or if you want some info about the species right now, take a look here.
And as promised, here are the slides of the .NET Compact Framework talk (versions 1.0 and 2.0).
The planning process started way earlier than last year, but as usual some things remained open until the very last minute. Since the teamwork and enthusiasm were extraordinary this year, we finally made it almost as planned. We surely did not reach our goal of releasing the recordings within hours after a talk ended, mainly because we underestimated the amount of knowledge and pain it would take to actually get the recordings running on the iPod. One of the guidelines for this year’s official recordings was that they had to run on current-generation video iPods and carry all the metatags. The team pulled it off, and we ended up releasing (nearly) the first half of the 23c3 recordings to the public after only 2 weeks, instead of last year’s 4-6 months. Even better: we managed to improve the video quality and got smaller files at the same time. That wouldn’t have been possible without the encoding-pipeline knowledge that Michael Feiri brought into the team. Even with that knowledge, it took several days to build a working pipeline (yes, the iPod is one special piece of hardware). The complete encoding pipeline we used will be documented and released soon.
So after all that planning we finally packed our stuff and hit the road:
After some hours and the unpacking the video studio looked like this:
Here’s a close-up of the Windows Media Encoders, the h.264 encoding machine, and the storage (all from behind):
So we surely brought enough processing power to Berlin. And this is what took the picture:
Want to see what it’s like watching “out of the window” of the video studio? No problem:
or how about another view of the studio:
So, now you have seen some pictures of the video studio and the setup. But I bet you want to know some more details about the setup itself, so I created an overview for you:
Since last year’s setup was completely digital (at least as planned) and we ended up using the DV-tape backup because all the recordings were screwed up, we thought it might be a good idea to go back one step and use analog video (FBAS/composite) to transport and record the talks. We had the DV-tape backup again this year, and to be honest, in some cases we had to fall back to it: fewer than 10 out of 130 recordings were screwed up and required the DV backup. That means 120 of them worked out as planned. Great! We are currently cutting and encoding them, and as you read this, more than half of them should be up on the official servers and the mirrors.
Now, in the aftermath of the congress, we learned a new lesson: there may be one or another speaker who does not want us to release the recording of their talk. For the future, we are thinking about something like a “don’t record me” list to avoid misunderstandings.
Today at 6 p.m. the second .NET user group meeting takes place in Ilmenau (campus, building F, computer lab).
- Welcome and news about the user group (Nico Orschel, Microsoft Student Partner)
- .NET Compact Framework (Daniel Kirstenpfad, Microsoft Senior Student Partner)
- Mobile web with ASP.NET 2.0 (Nico Orschel, Microsoft Student Partner)
- Networking and a relaxed end to the meeting
Attending the meeting is free, without obligation, and requires no registration.
You can also read all of this on www.dotnetcommunity.de. The slides will be available there and here after the event.
That’s what I call a repair:
“A while ago, a 700 MHz iBook was given to me with an infamous video-problem. An iBook which boots, but gives no output, neither to its own display nor to a hooked up external monitor.”
The first 53 recordings of the 23c3 have been uploaded to the server. The remaining ones will be available in the next few days.
You have the following choices:
Because of a major hardware fault on the first day of 23c3, I was not able to blog about the things that happened at the congress. I am going to catch up on everything soon.
In the meantime, take this information: the recordings are made. The majority of the recordings worked as planned and are going to be encoded and released in the next hours/days. For some lectures something went wrong, but since we had a great backup strategy, nothing is lost and those will be put online later.
One question came up frequently: What format and codec will the final official recordings have?
Here is the answer:
Video: h.264, 640×480, x264 parameters: --no-cabac --level=30 --subme=7 --me=umh --crf=23 --ref=2 --partitions=all --mixed-refs
Audio: AAC (encoded with neroaac)
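Assembled into an actual invocation, those x264 parameters would look roughly like this. A small Python sketch that builds the command line; the input/output file names and the raw-video input format are placeholders, not the actual pipeline:

```python
# Sketch: build the x264 command line from the parameters above.
# File names are placeholders; the real pipeline feeds x264 differently.
X264_OPTS = [
    "--no-cabac", "--level=30", "--subme=7", "--me=umh",
    "--crf=23", "--ref=2", "--partitions=all", "--mixed-refs",
]

def build_x264_cmd(infile, outfile):
    """Return the argv list for one encoding run."""
    return ["x264", *X264_OPTS, "-o", outfile, infile]

cmd = build_x264_cmd("talk.y4m", "talk.264")
print(" ".join(cmd))
```

The resulting raw h.264 stream and the AAC audio track then still need to be muxed into an iPod-compatible MP4 container.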
The machines are currently encoding and muxing the recordings. So stay tuned to get your hands on the high-quality official recordings soon.
BTW: Thanks to Michael Feiri for his in depth knowledge about the whole encoding process and tools that are used and his help with all the hassle around it.
Several months after I was first told about last.fm, I finally ended up using it. I installed it yesterday on my main music-playing machine, and so far the experience is great.
“Last.fm is a service that records what you listen to, and then presents you with an array of interesting things based upon your tastes — artists you might like, users with similar taste, personalised radio streams, charts, and much more.”
Since last.fm monitors what music I am listening to, it can also be used to create a kind of “personal chart list” which you can put on your website. It may look like this:
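last.fm exposes user data as XML feeds, which is one way to build such a chart for a website. The feed layout sketched below (a top-artists list with name and playcount per artist) is an assumption for illustration, not last.fm’s documented schema, so verify it against the feed your account actually serves:

```python
# Sketch: turn a last.fm-style top-artists XML feed into an HTML chart list.
# The <artist><name>/<playcount> layout is an assumption; check the real feed.
import xml.etree.ElementTree as ET

def chart_html(xml_text, limit=5):
    """Render the first `limit` artists of the feed as an HTML ordered list."""
    root = ET.fromstring(xml_text)
    rows = []
    for artist in list(root.iter("artist"))[:limit]:
        name = artist.findtext("name", default="?")
        plays = artist.findtext("playcount", default="0")
        rows.append(f"<li>{name} ({plays} plays)</li>")
    return "<ol>" + "".join(rows) + "</ol>"

# Hand-made sample data in the assumed layout:
sample = """<topartists user="example">
  <artist><name>Kraftwerk</name><playcount>42</playcount></artist>
  <artist><name>Daft Punk</name><playcount>23</playcount></artist>
</topartists>"""

print(chart_html(sample))
# → <ol><li>Kraftwerk (42 plays)</li><li>Daft Punk (23 plays)</li></ol>
```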
In the meantime I was pointed to another quite similar service called “Pandora”. You also get a free high-quality radio stream from them, but Pandora works very differently on the inside. Instead of taking the social approach like last.fm, Pandora wants to investigate the “genome of music”:
“Together we set out to capture the essence of music at the most fundamental level. We ended up assembling literally hundreds of musical attributes or “genes” into a very large Music Genome. Taken together these genes capture the unique and magical musical identity of a song – everything from melody, harmony and rhythm, to instrumentation, orchestration, arrangement, lyrics, and of course the rich world of singing and vocal harmony. It’s not about what a band looks like, or what genre they supposedly belong to, or about who buys their records – it’s about what each individual song sounds like.”
Since last.fm has a really nice tool for my Mac and works really well for my music taste, I am going to stick with it. Everyone else: go and try both.