“A Dark Pattern is a type of user interface that appears to have been carefully crafted to trick users into doing things, such as buying insurance with their purchase or signing up for recurring bills.”
It’s about time to import some data into our previously established object scheme. If you want to do this yourself, you first want to run the Crunchbase mirroring tool and create your own mirror on your hard disk.
In the next step another small tool needs to be written – a tool that creates nice clean GQL import scripts for our data. Since every data source is different, there’s not really a way around this step: in the end you’ll need to extract data from one source and import it into the other. A different solution could be to implement a dedicated importer for the GraphDB – but I’ll leave that for another article series. Back to our tool: it’s called “First-Import” and its only purpose is to create a first small graph out of the mirrored Crunchbase data and fill the mainly primitive data attributes. Download this tool here.
This is why in this first step we mainly focus on the following object types:
Additionally, all edges pointing to a Company object, as well as the competition relationships, will be imported in this part of the article series.
So what does the first-import tool do? Simple:
- it deserializes each mirrored JSON object and maps all of its attributes to attribute names in our graph data object scheme by outputting a simple query
- simple attribute types like String and Integer are assigned using the “=” operator in the Graph Query Language
- 1:1 references are assigned by assigning a REF(…) to the attribute – for example: INSERT INTO Product VALUES (Company = REF(Permalink='companyname'))
- 1:n references are assigned by assigning a SETOF(…) to the attribute. Because we are not using a bulk import interface but the standard GQL REST interface, the object(s) we are going to reference must already exist – therefore we chose to do this 1:n linking in a separate UPDATE step after creating the objects themselves. Knowing this, the UPDATE looks like this: UPDATE Company SET (ADD TO Competitions SETOF(Permalink='…',Permalink='…')) WHERE Permalink = 'companyname'
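To make the mapping concrete, here is a minimal sketch of how such a query generator could look. It is written in Python rather than the C# of the actual first-import tool, and the attribute names are illustrative only:

```python
def escape(value):
    """Escape single quotes for use inside a GQL string literal."""
    return str(value).replace("'", "''")

def build_insert(type_name, attributes):
    """Build a GQL INSERT for the primitive attributes of one JSON object."""
    pairs = ", ".join(f"{name} = '{escape(v)}'" for name, v in attributes.items())
    return f"INSERT INTO {type_name} VALUES ({pairs})"

def build_competition_update(permalink, competitor_permalinks):
    """Build the separate UPDATE that links a company to its competitors."""
    refs = ", ".join(f"Permalink = '{escape(p)}'" for p in competitor_permalinks)
    return (f"UPDATE Company SET (ADD TO Competitions SETOF({refs})) "
            f"WHERE Permalink = '{permalink}'")

# Illustrative only; real Crunchbase objects carry many more attributes.
company = {"Permalink": "example-co", "Name": "Example Co"}
print(build_insert("Company", company))
print(build_competition_update("example-co", ["rival-a", "rival-b"]))
```

The real tool additionally has to handle missing attributes, dates and the REF(…) case, but the shape of the generated queries is the same.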
For the most part it was copy-and-paste work to get the first-import tool together – it could have been done in a more sophisticated way (like using reflection on the deserialized JSON objects), but that’s most probably part of another article.
When run in the “crunchbase” directory created by the Crunchbase Mirroring tool the first-import tool generates GQL scripts – 6 of them to be precise:
The last script is named “Step_3” because it’s supposed to come after all the others.
These scripts can be easily imported after establishing the object scheme. The thing is though – it won’t be that fast. Why is that? We’re creating several thousand nodes and the edges between them. To create such an edge the Query Language needs to identify the node the edge originates from and the node the edge should point to. To find these nodes the user is free to specify matching criteria, just like in a WHERE clause.
So if you do an UPDATE Company SET (ADD TO Competitions SETOF(Permalink='company1',Permalink='company2')) WHERE Permalink = 'companyname', the GraphDB needs to access the node identified by the Permalink attribute with the value “companyname” and the two nodes with the values “company1” and “company2” to create the two edges. It will work just as the scripts are written, but it won’t be as fast as it could be. What can help to speed things up are indices. Indices are used by the GraphDB to identify and find specific objects, mainly in the evaluation of a WHERE clause.
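To illustrate why the index matters, here is a small Python sketch contrasting the linear scan a WHERE clause would need without an index with the single lookup a HASHTABLE index makes possible. The data structures are a deliberate simplification, not the GraphDB’s internals:

```python
# Build a toy node set; each node is identified by its Permalink.
nodes = [{"Permalink": f"company{i}", "Name": f"Company {i}"} for i in range(10000)]

def find_scan(permalink):
    """Without an index, every WHERE Permalink = '...' is a full scan."""
    return next(n for n in nodes if n["Permalink"] == permalink)

# The HASHTABLE index in a nutshell: attribute value -> node.
index = {n["Permalink"]: n for n in nodes}

def find_indexed(permalink):
    """With the index, the same lookup is a single hash access."""
    return index[permalink]

# Both return the same node; the indexed variant just gets there faster.
assert find_scan("company9999") is find_indexed("company9999")
```

With tens of thousands of nodes and an edge-creating UPDATE per node, the difference between a scan and a hash lookup adds up quickly.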
The sones GraphDB offers a number of integrated indices, one of which is HASHTABLE, which we are going to use in this example. Furthermore, anyone interested can implement their own index plugin – a tutorial on how to do that will be online in the future. If you’re interested now, just ask how we can help you make it happen!
Back to the indices in our example:
The syntax for creating an index is quite easy: the only thing you have to do is tell the CREATE INDEX query on which type and attribute the index should be created, and of which index type the index should be. Since we’re using the Permalink attribute of the Crunchbase objects as an identifier in this example (it could be any other attribute or group of attributes that identifies one particular object), we want to create indices on the Permalink attribute for the full speed-up. This looks like this:
- CREATE INDEX ON Company (Permalink) INDEXTYPE HashTable
- CREATE INDEX ON FinancialOrganization (Permalink) INDEXTYPE HashTable
- CREATE INDEX ON Person (Permalink) INDEXTYPE HashTable
- CREATE INDEX ON ServiceProvider (Permalink) INDEXTYPE HashTable
- CREATE INDEX ON Product (Permalink) INDEXTYPE HashTable
Looks easy, is easy! To take full advantage, this index creation should of course be done before creating the first nodes and edges.
After we got that sorted the only thing that’s left is to run the scripts. This will, depending on your machine, take a minute or two.
So after running those scripts, this is what happened:
- all Company, FinancialOrganization, Person, ServiceProvider and Product objects are created and filled with primitive data types
- all attributes which are essentially references (1:1 or 1:n) to a Company object are set
That’s it for this part – in the next part of the series we will dive deeper into connecting nodes with edges. There is a ton of things that can be done with the data – stay tuned for the next part.
After the overview and the first use-case introduction it’s about time to play with some data objects.
So how can one actually access the data of Crunchbase? Easy as pie: Crunchbase offers an easy-to-use interface to get all information out of their database in a fairly structured JSON format. So what we did was write a tool that downloads all the available data to a local machine, so we can play with it as we like in the following steps.
This small tool is called MirrorCrunchbase and can be downloaded as binary and source code here. As with all source code and tools in this series, it runs on Windows and Linux (Mono). You can use the source code to get an impression of what’s going on, or just use the included binaries (in bin/Debug) to mirror the data of Crunchbase.
So what does the MirrorCrunchbase tool actually do? First it gets the list of all objects, like the company names, and then it retrieves each company object according to its name and stores everything in .js files. Easy, eh?
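The tool itself is written in C#, but its logic can be sketched in a few lines of Python. The API base URL and endpoint paths below are assumptions based on the historic Crunchbase v1 API and may differ from what MirrorCrunchbase actually used:

```python
import json
import os
import urllib.request

# Assumed base URL of the historic Crunchbase v1 JSON API.
API = "http://api.crunchbase.com/v/1"

def entity_url(singular, permalink):
    """URL of one entity's .js file, e.g. .../company/facebook.js."""
    return f"{API}/{singular}/{permalink}.js"

def mirror(plural, singular, target_dir="crunchbase"):
    """Fetch the entity list, then each entity, and store them as .js files."""
    os.makedirs(target_dir, exist_ok=True)
    with urllib.request.urlopen(f"{API}/{plural}.js") as response:
        entities = json.load(response)
    for entity in entities:
        permalink = entity["permalink"]
        with urllib.request.urlopen(entity_url(singular, permalink)) as response:
            raw = response.read()
        with open(os.path.join(target_dir, f"{permalink}.js"), "wb") as f:
            f.write(raw)

# mirror("companies", "company")  # repeat for people, products, ...
```

One list request plus one request per entity – exactly the two-step “get the list, then each object” behavior described above.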
When it’s running you get console output showing its progress, and after successful completion you should end up with a directory structure containing the mirrored .js files.
The .js files basically store all information according to the data scheme overview picture of part 2. So what we want to do now is transform this overview into a GQL data scheme we can start to work with. A main concept of the sones GraphDB is to allow the user to evolve a data scheme over time. That way the user does not have to have the final data scheme before the first create statement. Instead the user can start with a basic data scheme containing only standard data types and add complex user-defined types as the migration goes along. That’s a fundamentally different approach from what database administrators and users are used to today.
Today’s user-generated data evolves and grows, and it’s not possible to foresee in which ways attributes will need to be added, removed or renamed. Maybe the scheme changes completely. Until now, every time the necessity emerged to change anything on an established and populated data scheme, a complex and costly migration process had to be started. To substantially reduce, or in some cases even eliminate, the need for such a process is a design goal of the sones GraphDB.
In the Crunchbase use-case this results in a fairly straight-forward process to establish and fill the data scheme. First we create all types with their correct name and add only those attributes which can be filled from the start – like primitives or direct references. All Lists and Sets of Edges can be added later on.
So these would be the Create-Type Statements to start with in this use-case:
CREATE TYPE Company ATTRIBUTES ( String Alias_List, String BlogFeedURL, String BlogURL, String Category, DateTime Created_At, String CrunchbaseURL, DateTime Deadpooled_At, String Description, String EMailAdress, DateTime Founded_At, String HomepageURL, Integer NumberOfEmployees, String Overview, String Permalink, String PhoneNumber, String Tags, String TwitterUsername, DateTime Updated_At, Set<Company> Competitions )
CREATE TYPE FinancialOrganization ATTRIBUTES ( String Alias_List, String BlogFeedURL, String BlogURL, DateTime Created_At, String CrunchbaseURL, String Description, String EMailAdress, DateTime Founded_At, String HomepageURL, String Name, Integer NumberOfEmployees, String Overview, String Permalink, String PhoneNumber, String Tags, String TwitterUsername, DateTime Updated_At )
CREATE TYPE Product ATTRIBUTES ( String BlogFeedURL, String BlogURL, Company Company, DateTime Created_At, String CrunchbaseURL, DateTime Deadpooled_At, String HomepageURL, String InviteShareURL, DateTime Launched_At, String Name, String Overview, String Permalink, String StageCode, String Tags, String TwitterUsername, DateTime Updated_At)
CREATE TYPE ExternalLink ATTRIBUTES ( String ExternalURL, String Title )
CREATE TYPE EmbeddedVideo ATTRIBUTES ( String Description, String EmbedCode )
CREATE TYPE Image ATTRIBUTES ( String Attribution, Integer SizeX, Integer SizeY, String ImageURL )
CREATE TYPE IPO ATTRIBUTES ( DateTime Published_At, String StockSymbol, Double Valuation, String ValuationCurrency )
CREATE TYPE Acquisition ATTRIBUTES ( DateTime Acquired_At, Company Company, Double Price, String PriceCurrency, String SourceDestination, String SourceURL, String TermCode )
CREATE TYPE Office ATTRIBUTES ( String Address1, String Address2, String City, String CountryCode, String Description, Double Latitude, Double Longitude, String StateCode, String ZipCode )
CREATE TYPE Milestone ATTRIBUTES ( String Description, String SourceDescription, String SourceURL, DateTime Stoned_At )
CREATE TYPE Fund ATTRIBUTES ( DateTime Funded_At, String Name, Double RaisedAmount, String RaisedCurrencyCode, String SourceDescription, String SourceURL )
CREATE TYPE Person ATTRIBUTES ( String AffiliationName, String Alias_List, String Birthplace, String BlogFeedURL, String BlogURL, DateTime Birthday, DateTime Created_At, String CrunchbaseURL, String FirstName, String HomepageURL, Image Image, String LastName, String Overview, String Permalink, String Tags, String TwitterUsername, DateTime Updated_At )
CREATE TYPE Degree ATTRIBUTES ( String DegreeType, DateTime Graduated_At, String Institution, String Subject )
CREATE TYPE Relationship ATTRIBUTES ( Boolean Is_Past, Person Person, String Title )
CREATE TYPE ServiceProvider ATTRIBUTES ( String Alias_List, DateTime Created_At, String CrunchbaseURL, String EMailAdress, String HomepageURL, Image Image, String Name, String Overview, String Permalink, String PhoneNumber, String Tags, DateTime Updated_At )
CREATE TYPE Providership ATTRIBUTES ( Boolean Is_Past, ServiceProvider Provider, String Title )
CREATE TYPE Investment ATTRIBUTES ( Company Company, FinancialOrganization FinancialOrganization, Person Person )
CREATE TYPE FundingRound ATTRIBUTES ( Company Company, DateTime Funded_At, Double RaisedAmount, String RaisedCurrencyCode, String RoundCode, String SourceDescription, String SourceURL )
You can directly download the according GQL script here. If you use the sonesExample application from our open source distribution, you can create a subfolder “scripts” in the binary directory and put the downloaded script file there. When you’re using the integrated WebShell, which is by default launched on port 9975 and can be accessed by browsing to http://localhost:9975/WebShell, you can execute the script using the command “execdbscript” followed by the filename of the script.
As you can see, it’s quite a straightforward copy-paste action from the graphical scheme. Even references are not represented by a cumbersome relational helper; instead, if you want to reference a Company object you can just do that (we actually did – look for example at the last line of the GQL script above). As a result, when you execute the above script you get all the types necessary to fill in data in the next step.
So that’s it for this part – in the next part of this series we will start the initial data import using a small tool which reads the mirrored data and outputs GQL insert queries.
Where to start: existing data scheme and API
This series already tells in its name what the use case is: the “CrunchBase”. Their website speaks for itself to explain what it is: “CrunchBase is the free database of technology companies, people, and investors that anyone can edit.” There are many reasons why this was chosen as a use-case. One important reason is that all data behind the CrunchBase service is licensed under the Creative-Commons-Attribution (CC-BY) license. So it’s freely available data on high-tech companies, people and investors.
Currently there are more than 40,000 different companies, 51,000 different people and 4,200 different investors in the database. The flood of information is big, and the scale of connectivity even bigger. The graph represented by the nodes could be even bigger than that, but because of the limiting factors of current relational database technology it hasn’t been feasible to try.
This is where the sones GraphDB comes to the rescue: it’s optimized to handle huge datasets of strongly connected data. Since the CrunchBase data can be used as a starting point to drive connectivity to even greater detail, it’s a great use-case to show this migration and handling.
Thankfully the developers at CrunchBase already made one or two steps into an object-oriented world by offering an API which answers queries in JSON format. By using this API, everyone can access the complete data set in a very structured way. That’s both good and bad. Because the technologies used don’t offer a way to represent linked objects, they had to use what we call “relational helpers”. For example: a person founded a company (person and company being JSON objects). There’s no standardized way to model a relationship between those two. So what the CrunchBase developers did is add a unique identifier to each object, plus a new object type which is used as a “relational helper”. The only purpose of these helper objects is to point towards the unique identifier of another object type. So in our example, the relationship attribute of the person object does not point directly to a specific company or relationship; it points to a helper object which stores the information about which unique identifier of which object type is meant by that link.
To visualize this here’s the data scheme behind the CrunchBase (+all currently available links):
As you can see there are many more “relational helper” dead-ends in the scheme. What an application had to do up until now is resolve these dead-ends by going the extra mile. So instead of retrieving a person and all relationships – and with them all the data one would expect – the application has to split the request into many queries to internally build a structure which essentially is a graph.
Another example is the company object. Like the name implies, all data of a company is stored there. It holds an attribute called investments which isn’t a primitive data type (like a number or text) but a user-defined complex data type called List<FundingRoundStructure> – a simple list of FundingRoundStructure objects.
When we take a look at the FundingRoundStructure, there’s an attribute called company which is of the user-defined data type CompanyStructure. This CompanyStructure is one of those dead-ends, because it contains just a name and a unique id. The application now needs to retrieve the right company object with this unique id to access the company information.
Simple things told in a simple way: no matter where you start, you will always end up in a dead-end which forces you to start over with the information you found there. It’s neither user-friendly nor easy to implement.
The good news is that there is a way to handle this type of data, and the links between data, in a very easy way. The sones GraphDB provides a rich set of features to make the lives of developers and users easier. In that context: if we would like to know which companies also received funding from the same investor as, let’s say, the company “facebook”, the only thing necessary would be one short query. Besides that, those “relational helpers” are redundant information: in a graph database this information is stored in the form of edges, not in any helper objects.
The reason why the developers of CrunchBase had to use these helpers is that JSON and the relational tables behind it aren’t able to store this information directly or query it directly. To learn more about relational tables and databases, try this link.
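A tiny Python example makes the two-step lookup tangible. The JSON shapes below are simplified, hypothetical fixtures, not the exact CrunchBase format:

```python
# Hypothetical fixtures illustrating the "relational helper" pattern:
# the relationship entry carries only a permalink, so the full company
# record must be looked up in a second step.
companies = {
    "example-co": {"permalink": "example-co", "name": "Example Co",
                   "number_of_employees": 42},
}

person = {
    "permalink": "jane-doe",
    "relationships": [
        {"is_past": False, "title": "CEO",
         "firm": {"permalink": "example-co", "name": "Example Co"}},
    ],
}

def resolve_relationships(person, companies):
    """Follow each helper object's permalink to the real company record."""
    resolved = []
    for rel in person["relationships"]:
        permalink = rel["firm"]["permalink"]          # the dead-end: only an id
        resolved.append((rel["title"], companies[permalink]))  # second lookup
    return resolved

for title, company in resolve_relationships(person, companies):
    print(title, company["name"], company["number_of_employees"])
```

Every application consuming the API has to repeat this resolve step for every helper object – exactly the extra mile a graph edge would make unnecessary.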
I want to end this part of the series with a picture of the above relational diagram (without the arrows and connections).
The next part of the series will show how we can access the available information and how a graph scheme starts to evolve.
If you want to show how easy it is for a user or developer to work on existing datasets with the sones GraphDB, you do that best by showing an example – a use case. And this is exactly what this short series of articles will do: it’ll show the important steps and concepts, technologies and designs behind the use case and the sones GraphDB.
The sones GraphDB is a DBMS focusing on strongly connected unstructured and semi-structured data. As the name implies, these data sets are organized object-oriented as nodes and edges in a graph data structure.
“a simple graph”
To handle these complex graph data structures the user is given a powerful toolset: the Graph Query Language. It’s a lot like SQL when it comes to comprehensibility – but when it comes to functionality it’s designed from the ground up to help the user do previously tricky or impossible things with one easy query.
This article series is going to show how real conventional-relational data is aggregated and ported to an easy-to-understand and more flexible graph data structure using the sones GraphDB. And because this is not only about telling but also about doing, we will release all necessary tools and source code along with the articles. That means: this is a workshop and a use case in one awesome article series.
The only requirement to follow all steps of this series is a working sones GraphDB. Since we just released the Open Source Edition version 1.1, you should be fine following the documentation on how to download and install it here. Beside that you won’t need programming skills, but if you have them you can dive deep into every aspect. Be our guest!
This first article is titled “Overview” and that’s what you’ll get:
part 1: Overview
part 2: A short introduction into the use-case and its relational data
part 3: Which data is available and how does a GQL data scheme start?
part 4: The initial data import
part 5: Linking nodes and edges: What’s connected with what and how does the scheme evolve?
part 6: Querying the data and how to access it from applications?
If you want just the essence of information that makes you go faster on your daily tasks, cheat sheets are exactly that: the essence of information.
Today I found this cheat sheet particularly useful:
If you – like us – need a picture of a shiny product box of a soon-to-be-released product for your presentation, you may want to consider buying one of several tools to create such shots. But you can also just use a small tool and the Windows Presentation Foundation.
There’s a great article on CodeProject where almost everything is pre-set-up for our needs. And everything is written in C# – great stuff!
In action it looks like this:
There are more than 10 free eBooks available about Python:
… like “Dive into Python”:
“This is a fantastic book that is also available in print. It covers everything, from installing Python and the language’s syntax, right up to web services and unit testing. This is a good book to learn from, but it’s also excellent to use a reference. I frequently find myself visiting the site! If you only read one book on this list make it this one.”
An Introduction to Tkinter
How to think like a Computer Scientist
The Standard Python Library
Invent Your Own Computer Games with Python
The Django Book
The Pylons Book
Data Structures and Algorithms with Object-Oriented Design Patterns in Python
Building Skills in Python
Building Skills in OO Design
Source 1: Dive into Python
Source 2: An Introduction to Tkinter
Source 3: How to think like a Computer Scientist
Source 4: The Standard Python Library
Source 5: Invent Your Own Computer Games with Python
Source 6: The Django Book
Source 7: The Pylons Book
Source 8: Data Structures and Algorithms with Object-Oriented Design Patterns in Python
Source 9: Building Skills in Python
Source 10: Building Skills in OO Design
“This book written by Granville Barnett and Luca Del Tongo is part of an effort to provide all developers with a core understanding of algorithms that operate on various common, and uncommon data structures.
Data Structures and Algorithms: Annotated Reference with Examples is completely free!”
The first draft is available now – and it’s 97 pages.
There’s a new free tool available from officelabs:
“pptPlex is a plug-in that explores an alternate method for presenting a PowerPoint slide deck. Using pptPlex, you can present your slides as a tour through a zoomable canvas instead of a series of linear slides.”
I have been using iTunes as my main music player software for about 5 years now. In that time I had to move and restore my growing iTunes library more than 10 times. It can become quite a job to get it done properly, so I was glad to come across this great how-to article to help you and me out in the future:
“I see some discussion about fixing busted iTunes libraries, either when moving one on the same computer or migrating to a new one. Here’s what I have found works for me. Bonus: no slow AppleScripts or payments (donations cheerfully accepted and squandered).
First, what I have discovered about how iTunes manages music collections. There are two files it uses, one that is binary (ie, machine readable for faster performance on searching, sorting, add/edit/delete operations) and one that has the same information but in a human readable format (for a certain subset of humans who can read XML natively). The XML file is written from the binary file as a backup (check the dates to confirm).”
But that isn’t where it needs to stop. I had to do some more things with my iTunes library lately – like extracting all the ratings and exporting them into a new music player software I wanted to test. I therefore wrote myself a little tool in C# that does the job of reading in the whole iTunes library and giving you programmatic access to it. It only needs read access to the Mediathek.xml file iTunes stores in its music folder, and from there on you can work your way through the bazillions of music tracks you may or may not have in your library. It even makes the find-and-replace job a bit easier than the solution mentioned in the article above.
I release the code under the CC-Attribution-NonCommercial-ShareAlike 3.0 license and here is your download:
This code is a simple example of how to use the XmlTextReader in C# and how to traverse through an XML document. It should be easy to understand and easy to change. I would love to hear from you if and when it helped you.
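The tool itself is C#, but the same idea can be sketched in a few lines of Python: the iTunes library XML is a property list, so the whole track dictionary, ratings included, can be read with the standard library. The file name Mediathek.xml follows the (German) naming mentioned above:

```python
import plistlib

def read_itunes_library(path):
    """Read the iTunes library XML (a property list) and return its tracks."""
    with open(path, "rb") as f:
        library = plistlib.load(f)
    # "Tracks" maps track ids to per-track dictionaries (Name, Rating, ...).
    return list(library.get("Tracks", {}).values())

# Usage sketch:
# tracks = read_itunes_library("Mediathek.xml")
# rated = [t for t in tracks if "Rating" in t]
```

From such a track list, exporting ratings to another player is just a matter of iterating and writing out whatever format the target expects.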
Source 1: iTunes library, fixing a broken one or moving one
Source 2: ReadiTunesMediathek.zip (11,82 KB)
The wonderful Leipzig team will soon make an appearance again with the .NET Open Space 2008:
“You have the best conversations far away from a fixed agenda, over a coffee and on a first-name basis. There is no division of roles into speakers and listeners, and the topics emerge on site all by themselves. That is the idea of the .NET Open Space. Here everyone is equal. The organizers also stay in the background and only moderate now and then. Those responsible for the topic areas invite participants to join them.
The .NET Open Space currently consists of the three parallel topic areas:
- Mobile Computing
- Soft Skills”
There is no agenda, but there is a schedule:
You may have heard about things like “guidelines for user interfaces”. Sometimes I tend to think that there is no such thing as a design guideline for a better user interface, because some applications are just plain unusable for a normal human being.
But there are guidelines for almost everything and I wanted to give an overview:
- Windows XP Guidelines for Applications
- Windows Vista User Experience Guidelines (direct pdf link)
- Office System 2007 User Interface Design Guidelines
- Guidelines for Keyboard User Interface Design
- Apple User Experience Guides Overview
- Apple Human Interface Guidelines
- Apple Web Design Guide (oooold)
- KDE Standards User Interface Guidelines
- GNOME Human Interface Guidelines
- Motif Style Guide
I am once again pleased to present the official trailer for this year’s FIWAK. FIWAK is the annual outdoor conference presented by FeM e.V. This year these lectures are planned (German only):
- OpenStreetMap workshop by Markus Brückner and Dominik Tritscher
- Technical basics of DVB-T by Sebastian Schwarz
- Open source video editing by Florian Raschke
- FeM history by Mario Holbe
- Club-internal communication by Michael Bock
- Dance workshop with Udo Pescheck
- Job application training with MLP
- Whiteboard technologies by Smart Systems
FIWAK takes place from 20 to 22 June 2008 in the forest around Elgersburg – a small town near Ilmenau. But now, watch the trailer:
Video: FeM FIWAK 2008 Trailer
Source 1: FIWAK Homepage
Once again it’s time for the annual forest-LAN-party-esque camp organized and held by FeM e.V.
It’s the 5th FIWAK (FemImWaldAußerKontrolle), taking place from 20 to 22 June 2008 at the open-air stage (Freilichtbühne) near Elgersburg, Germany.
You can still sign up if you’d like to come, watch the lectures and camp with the people there. If you’d like to get a more detailed impression of the last FIWAK, just take a look here.
Source 1: everything about FIWAK on this blog.
Source 2: FIWAK Einschreibesystem
The agenda of this year’s STC is online. You can take a look here.
“The date is set: our STC 2008 takes place on 15 May 2008!
We warmly invite you to Berlin and look forward to a great day with you! A great location, exciting talks and exchange with Microsoft experts and contacts await you, so that you can give your career a boost, entirely in the spirit of networking.
You will also have the chance to follow which Imagine Cup team will carry the German flag in Software Design at the international finals in Paris. The Imagine Cup is the world’s biggest technology competition for pupils and students – all information about the competition can be found at www.imaginecup.info.”
This year the STC will take place in the Kalkscheune in Berlin.
As usual, here’s the schematic overview of the things behind the curtain:
“The Chaos Communication Camp is an international, five-day open-air event for hackers and associated life-forms. The Camp features two conference tracks with interesting lectures, a workshop-track and over 30 villages providing workshops and gettogethers covering a specific topic.”
Chaos Communication Camp 2007
The International Hacker Open Air Gathering
8|9|10|11|12th August 2007
Finowfurt near Berlin, Germany (Old Europe)
“You can participate! Bring your tent and join our villages. The Camp has everything you need: power, internet, food and fun. The 100,000 square meter area features enough space to camp, cozy places to hang out and a nice pool and lake to swim and do nautic experiments.”
There are two lecture halls called “foo” and “bar”:
Today I gave a talk about IP-TV in our local research network – a project I have been involved in for the past year. And since I did some of the legal and coding work (YAPS), I was the one who wanted to talk about it the most…
First here’s the slidedeck:
The talk was recorded and you can watch it as soon as the post-production team has finished working on it – I’ll keep you posted.
Source: Slidedeck as PDF
We spent the last two days in Duisburg attending the Student Technology Conference 2007.
“From software architecture, user interfaces and robotics to games development with the XNA framework: current and brand-new technologies of the IT industry will be presented and demonstrated at a technologically advanced level. Microsoft’s Student Technology Conference is the perfect opportunity to engage with potential employers and to get in touch with Microsoft’s .NET technology. Additionally, the German finals of Microsoft’s premier technology competition, the Imagine Cup 2007, will take place in the Software Design invitational right here!”
The trailer for this year’s FIWAK is done:
I gave a talk yesterday about Windows Vista for developers. You can grab the slides here:
Here are my slides of my talk:
And I said that I would link to the article about .NET Reflector’s impact on “Serial Keygen” – go and read it here.
I wrote quite a lot of code for the 23rd Chaos Communication Congress. And because of that, I want to make it publicly available for everyone to download and use. It’s all GPL (because of the libraries used), so use it according to the license.
You can learn how to:
- deserialize the pentabarf schedule.en.xml file
- create a valid congress filename
- create and manipulate animations with text and bitmaps
- store those animations in AVI container files
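As a taste of the first two points, here is a hedged Python sketch: it parses a minimal, made-up excerpt of a pentabarf schedule and derives a recording filename. The naming scheme shown is an assumption for illustration, not the exact one used for the official releases:

```python
import xml.etree.ElementTree as ET

# Minimal, hypothetical excerpt of a pentabarf schedule file; the real
# schedule.en.xml contains far more metadata per event.
SCHEDULE = """<schedule>
  <day date="2006-12-27">
    <event id="1234"><title>Example Talk</title><slug>example_talk</slug></event>
  </day>
</schedule>"""

def parse_events(xml_text):
    """Deserialize the schedule XML into (id, title) tuples."""
    root = ET.fromstring(xml_text)
    return [(e.get("id"), e.findtext("title")) for e in root.iter("event")]

def congress_filename(congress, event_id, title):
    """Build a recording filename; this particular scheme is an assumption."""
    slug = "".join(c if c.isalnum() else "_" for c in title.lower())
    return f"{congress}-{event_id}-en-{slug}.mp4"

events = parse_events(SCHEDULE)
print(congress_filename("23c3", *events[0]))
```

The animation and AVI parts depend on the GPL libraries mentioned above and are best studied in the released source itself.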
And as promised, here are the slides of the .NET Compact Framework talk (versions 1.0 and 2.0):
The planning process started way earlier than last year, but as usual some things remained until the very last minute before everything started. But since this year the teamwork and enthusiasm were extraordinary, we finally made it almost as planned. We surely did not reach our goal of releasing the recordings hours after the talks ended – mainly because we underestimated the amount of knowledge and pain it took to actually get the recordings running on the iPod. It was one of the guidelines for this year’s official recordings: they had to run on current-generation video iPods and they had to have all the meta tags. So the team did it, and we ended up releasing (nearly) the first half of the 23c3 recordings to the public after only 2 weeks, instead of the 4-6 months of last year. Even better: we managed to improve the video quality and even got smaller files. That wouldn’t have been possible without the encoding-pipeline knowledge that Michael Feiri brought into the team. Even with that knowledge it took several days to actually build a working pipeline. (Yes, the iPod is one special piece of hardware.) The complete encoding pipeline we used will be documented and released soon.
So after all that planning we finally packed our stuff and hit the road:
After some hours and the unpacking the video studio looked like this:
Here’s a close-up of the Windows Media Encoders, the h.264 encoding machine and the storage (all from behind):
So we surely brought enough processing power to Berlin. And this is what took the picture:
Want to see what it’s like to look “out of the window” of the video studio? No problem:
or how about another view of the studio:
So now you have seen some pictures of the video studio and the setup. But I bet you want to know some more details about the setup itself, so I created an overview for you:
Since last year's setup was planned to be completely digital, and we ended up using the DV-tape backup because all the recordings were screwed up, we thought it might be a good idea to go back one step and use analog video (FBAS) to transport and record the talks. We also had the DV-tape backup this year, and to be honest, in some cases we have to fall back to it: fewer than 10 out of 130 recordings are screwed up, so for those we have to use the DV backup. That means more than 120 of them worked out as planned. Great! We are currently cutting and encoding them, and as you read this, more than half of them should be up on the official servers and the mirrors.
Now, in the aftermath of the congress, we learned a new lesson: some speakers do not want us to release the recordings of their talks. In the future we are thinking about having something like a "don't record me" list to avoid misunderstandings.
The first 53 recordings of the 23c3 have been uploaded to the server. The remaining ones will be available in the next few days.
You have the following choices:
Yesterday the first user group meeting took place in the computer pool of the Faculty of Mechanical Engineering. Sven and Nico gave their lectures; the slides will be available on the user group's website as soon as possible.
The event is over and it was great! More than 7 hours of new information, compressed into 8 talks, were presented today. If you missed the event, don't worry: you can download the slide decks here, and of course, if you like, you can participate in another University Roadshow 2006 event in another German city (complete list and registration here). If you would like to attend some more talks at the TU Ilmenau, watch out for the local community website: www.dotnetcommunity.de. Since we're in the process of building an INETA .NET community here in Ilmenau, we're planning several events in the next months. To name one: on the 25th and 26th of next month there's an ASP.NET workshop, held by my colleague Nico Orschel. More information on that can be found on dotnetcommunity.
The slide decks are available in three different formats (German-language versions only):
Tomorrow the US TechEd takes off. If you cannot attend personally you can attend it virtually.
Microsoft hosts a website called “Virtual TechEd” where you can watch talks and keynotes via streaming video.
Besides watching the talks, you can listen to the TechEd radio live stream.
Source 1: http://virtualteched.com/
Source 2: TechEd radio livestream
As this is a Germany-only offer, the announcement was originally in German:
“The decision has been made! The overwhelming number of 6477 participants made it very hard for the jury. Nevertheless, the German representatives for the world finals in India are now set. We want to celebrate the many great ideas and your super commitment with you properly, because a party under palm trees at 25° air temperature and 31° water temperature is possible in Germany too. We hereby cordially invite you to the Imagine Cup Beach Party on May 30, 2006 at the Tropical Island Resort in Brandenburg.
360 m long, 210 m wide and 107 m high, a footprint of 6.6 hectares with 7000 cubic meters of water and over 20,000 tropical plants: this is exactly the right place for a worthy finale of the Imagine Cup 2006 in Germany.
Deck-chair coding, Caribbean flair, a beach party in the largest self-supporting hall in the world, and all of it free of charge… let yourself be surprised!“
There are still about 50 places free for students. So go ahead and register! Places are assigned on a first-come, first-served basis.
Here is also a quick look at the agenda:
Source 1: Imagine Cup Beachparty 2006
Source 2: Microsoft Imagine Cup 2006
The complete MIX conference lectures (slides + videos) are online and ready to be downloaded. That's a lot of information…
“The MIX conference is a 72-hour conversation between Web developers, designers and business leaders. When you attend MIX you’ll learn the latest about IE7, Windows Media, Windows Live!, as well as “Atlas”, Microsoft’s new AJAX framework.”
The University of California, Berkeley just made a great number of their audio courses available for free download on iTunes. Just tune in and get a taste of Cal.
I actually got a taste of how incomparable the two universities are… The courses are great!
LISTEN TO EVENTS about the Arts, Education, Politics, Science and Technology
BE CONNECTED with what’s happening at UC Berkeley
But Berkeley is not the only university that offers some sort of online courses. FeM e.V. offers a growing number of complete TU Ilmenau courses with video and audio.
So here are some news about the 22c3 recordings:
According to the latest information I got, 130 of 146 recordings are ready to go. I don't know why the team decided to release them all at once, but unfortunately you'll have to be patient.
The release is planned for THIS WEEK. So stay tuned and check back for more information.
So here we are. The January passed by and still no recordings. What happened?
picture by namenlos
Straight after the congress we started working on the recordings, but soon we realised that almost all of the recordings we made live with ffmpeg and friends were corrupted: the audio and video are just not in sync.
It would be easy if the audio were just shifted by a fixed amount of time, but the offset changes throughout the recordings.
So we engaged DEFCON 4 and fell back on our backup solution, which is there just in case something goes wrong (who would have thought?!).
So what's DEFCON 4? We came to the congress with 400 brand-new DV tapes, and that's simply what our backup solution is: everything that was recorded during the 22c3 is on DV tape. And there it is in sync.
So we are extracting nearly all of the recordings from those DV tapes…
And as you can imagine, this takes some time. It actually takes less time than we expected; we are making serious progress. Together with the CCC it was decided that an intro and an outro should be added to each recording, which also takes some time. If you are experienced in creating scripted/batched DV material with definable text… we obviously need your help 🙂
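In case it helps anyone who wants to jump in: batch-generating intros boils down to producing one render command per talk title. The sketch below only builds the command lines as a dry run, using modern ffmpeg drawtext syntax and made-up filenames; the actual DV tooling and output settings we end up using may well differ.

```python
import shlex

def intro_command(title, outfile, seconds=5):
    """Build (but do not run) an ffmpeg command that would render a centered
    title card as PAL DV. drawtext needs an ffmpeg built with libfreetype."""
    drawtext = (f"drawtext=text='{title}':fontcolor=white:fontsize=36:"
                "x=(w-text_w)/2:y=(h-text_h)/2")
    args = ["ffmpeg",
            "-f", "lavfi", "-i", f"color=c=black:s=720x576:r=25:d={seconds}",
            "-vf", drawtext,
            "-target", "pal-dv", outfile]
    return " ".join(shlex.quote(a) for a in args)

# One entry per recording; in practice this list would come from the schedule.
talks = [("Example Talk One", "intro_001.dv"),
         ("Example Talk Two", "intro_002.dv")]
for title, outfile in talks:
    print(intro_command(title, outfile))
```

Printing the commands first (instead of executing them) makes it easy to review the batch before burning hours of render time on 130+ recordings.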
So when will the recordings be available? We hope very soon. Like I said, we are making good progress in getting the material off the tapes, but it's very difficult to give a time frame. Check back here or on the official 22c3 FeM homepage for updates; if we can get the intro/outro issue sorted out quickly, it'll probably be only days away.
If you have any comments, feel free to leave them here.
So here we are: a new year and just two days after the 22c3. As far as we can tell, everything was recorded as planned and everything went just great.
I want to say to everybody who helped make this happen: thank you very much. It was a pleasure and great fun to work with you guys. The results we achieved together speak for themselves: nearly 1 TByte of live-stream data downloaded, and nearly 400 listeners on our streams at peak times.
In fact there is a lot to do afterwards: we have to cut the MPEG-2 files (5 Mbit/s) of each lecture to set the start and end times correctly. We have to tag them and make them available (1 to 5 gigabytes each) for you as soon as possible, which means as soon as the last hard drive arrives from Berlin here in Ilmenau.
When all MPEG-2 files are complete, we will finish our MPEG-4 encodings; in fact, we are already encoding everything we have in MPEG-4 at 1.2 Mbit/s.
This is our main focus at the moment. It should be possible to make everything mentioned above available within January. After all this is done (or maybe in the meantime, we don't know at the moment), the remaining WMV on-demand streams will become available, as we have to re-encode them. So check back here to get more information and updates on this topic.
I was often asked why the WMV on-demand streams were only available for the first day. The answer is easy: we tested whether it's feasible to cut them nearly live, and we came to the conclusion that it raises the stress level for our team too high to handle for the complete 4 days of the conference. So we changed our plans to ensure that you have a mostly flawless live-streaming experience.
At the end: Here is the list of the people I want to thank for their support and help at 22c3 (without any order actually):
laforge and his team, maedness, mucki, namenlos, manu, cosrahn, Agtmulda, yray, cutcat, ecki, ahzf, ambanus, somi, all the video and audio angels who made the great audio and video possible, Ambion for the great support, the POC, who were our cable guys and even supported us with coffee, the NOC, which finally made IPv6 happen and made the best conference network ever possible… and a whole lot more people I forgot.
22. Chaos Communication Congress
Scanning your GPRS/UMTS IP network for fun and profit
We give an overview of the IP networks used for >=2.5G technologies. Our main focus is on scanning the overlaying IP network, on different Voice-over-IP filter implementations, and on the possibilities to circumvent them.
We want to explain the IP networks used in GPRS and UMTS cellular networks from the end-user's point of view: how do they work today, and what has to be done to get a normal web page, Voice-over-IP, or even a video stream onto your PDA or smartphone?
For your private investigations inside your provider's IP network, we demonstrate a TCP/UDP port and round-trip-time based traceroute program built on the .NET Compact Framework. With the help of this program we analyse the anti-Voice-over-IP filters implemented by different cellular providers and show you some possibilities to circumvent them _efficiently_, so we don't just tunnel all the traffic through a VPN. And even when these filters become more sophisticated in the future, we want to present some ideas on how to defend your right to talk via Voice-over-IP wherever and whenever you want to.
PrivateInvestigationNetworkToolSrc.zip (239,1 KB)
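The core of such a probe is simply timing connection attempts. As a rough, platform-neutral illustration (in Python rather than the .NET Compact Framework tool above), here is a TCP-connect round-trip-time measurement; the throwaway local listener exists only so the sketch is self-contained, and a real traceroute would additionally vary the IP TTL, which is omitted here.

```python
import socket
import threading
import time

def measure_tcp_rtt(host, port, timeout=2.0):
    """Time a TCP connect() to (host, port) and return the RTT in seconds.

    A traceroute-style probe would repeat this with increasing IP TTLs
    (socket.IP_TTL) to discover the hops in between; that part is omitted.
    Probing different ports reveals port-based filtering, e.g. against VoIP.
    """
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        return time.monotonic() - start

# Self-contained demo target: a throwaway local listener on a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=server.accept, daemon=True).start()

rtt = measure_tcp_rtt("127.0.0.1", port)
print(f"TCP connect RTT to 127.0.0.1:{port}: {rtt * 1000:.3f} ms")
```

Against a cellular provider's network, comparing RTTs and connect failures across ports is what exposes where (and how) a filter sits in the path.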
Nico held his talk this Friday, and here are his slides for you to download.
DotNET_Softwaretechnologie_fuer_das_Internet_2.ppt (1,79 MB)
It's time for another What The Hack wrap-up session: (slides are in German)
My slides are based on Martin Herfurt's talk at the What The Hack 2005. You can get his slide deck at trifinite.org.
Nico wrote a cool article about the TerraServer web service and how to use it with Visual Studio 2005. He uses this web service in his talks as a web service example, which is a great idea. The problem is that the example source code does not work under Visual Studio 2005, so he changed it to make it work, and you can download the new source code here.
Source 1: Nico’s Weblog
Source 2: TerraServer WebService
Source 3: Example Source
Source 4: Download VS2005 compatible Sourcecode
Here are the slides for my talk at the ".NET Chaostage" at the FH Deggendorf. They are in German, so be warned.
Slides Download: Einführung in .NET 2.0.ppt (2,74 MB)
Demos Download: Einführung in .NET Demos.zip (45,13 KB)
Oh…and Torsten Weber took some nice pictures of the FH Deggendorf campus:
In the future I will put all the slides from my talks online at schrankmonster. I will create a dedicated category: Talks and Slides
Slides Download: WTH_Exploiting_Pocket_PC.ppt (1,23 MB)
My slides are based on Collin Mulliner's talk at the What The Hack 2005.