Kinect Apps Challenge

Graphics & Media Lab and Microsoft Research Cambridge have announced a contest for applications that use the Kinect sensor. The authors of the five brightest apps will be funded to attend the 2012 Microsoft Research PhD Summer School. I blogged about last year's event in my previous post. Details of the contest are here.

I guess there's no need to explain what Kinect is. It is an extremely successful combined colour and depth sensor by Microsoft. It is distributed at a killer price (≈150) as an add-on for the Xbox, though it is hard to imagine the full range of its possible applications. Controlling a computer by gestures alone is considered a prime example of a natural user interface. To name some applications beyond gaming: this kind of NUI helps surgeons keep their hands clean, and Kinect helps blind people navigate through buildings. For more ideas, look at the winners of the OpenNI challenge. OpenNI is an open-source alternative to the Kinect SDK. Its strong point is its integration with PCL, but beware that you cannot use it for the contest: only the Kinect for Windows SDK is allowed.

Kinect is probably the most successful commercial outcome of Microsoft Research. MSR is unique because a lot of theoretical research is done there, and it is unclear whether funding it is profitable for Microsoft. But projects like Kinect dispel the doubts. I will compare MSR's organization to other industrial labs in a later post.

All the summer schools

This summer I attended two conferences (3DIMPVT, EMMCVPR) and two summer schools. I know my latency is somewhat annoying, but it's better to review them now than never. :) This post is about the summer schools; the following one is going to be about the conferences.

PhD Summer School in Cambridge

Both schools were organized by Microsoft Research. The first one, the PhD Summer School, was in Cambridge, UK. The lectures covered some general issues for computer science PhD students (like using cloud computing for research, and career perspectives) as well as some recent technical results by Microsoft Research. From the computer vision side, there were several talks:

• Antonio Criminisi described their InnerEye system for retrieval of similar body-part scans, which is useful for diagnosis based on the medical history of similar cases. He also covered the basics of Random Forests as an advertisement for his ICCV 2011 tutorial. The new thing was using peculiar weak classifiers (like 2nd-order separation surfaces); Antonio argued they perform much better than conventional trees in some cases.
• Andrew Fitzgibbon gave a brilliant lecture about pose estimation for Kinect (MSR Cambridge is really proud of that algorithm [Shotton, 2011]; this is a topic for another post).
• Olga Barinova talked about modern methods of image analysis and her work over the past two years (graphical models for non-maxima suppression in object detection and for urban scene parsing).

The other great talks were about .NET Gadgeteer, a system for modelling and even deployment of electronic gadgets (yes, hardware!), and F#, Microsoft's alternative to Scala, a language that combines the object-oriented and functional paradigms. Sir Tony Hoare also gave a lecture, so I had a chance to ask him how he ended up at Moscow State University in the 1960s. It turns out he studied statistics, and Andrey Kolmogorov was one of the leaders of the field at that time, so the internship was a great opportunity for him. He said he had liked his time in Moscow. :) There were also magnificent lectures by Simon Peyton-Jones about giving talks and writing papers. That advice is a must for everyone who does research; you can find the slides here. Slides for some of the lectures are available from the school page.

The school talks did not take all the time. Every night was occupied by some social event (go-karting, punting etc.) as well as unofficial after-parties in Cambridge pubs. It is definitely the most fun school/conference I've attended so far. Karting was especially great, with a quality track, pit-stops, stats and prizes, so special thanks to Microsoft for including it in the program!
Microsoft Computer Vision School in Moscow

This year, the Microsoft Research summer school in Russia was devoted to computer vision and organized in cooperation with our lab. The school started before its official opening with a homework assignment we authored (I was one of four student volunteers). The task was to develop an image classification method capable of distinguishing two indoor and two outdoor classes. The results were rated according to performance on a hidden test set. Artem Konev won the challenge with 95.5% accuracy and was awarded a prize consisting of an Xbox and a Kinect. Two years ago we used the same data for projects in the Introduction to Computer Vision course, where nobody reached even 90%. This reflects not just the skill of the participants, but also the progress of computer vision: all the top methods used PHOW descriptors and a linear SVM with an approximate decomposed χ2 kernel [Vedaldi and Zisserman, 2010], which were unavailable back then!

In fact, Andrew Zisserman was one of the speakers. Andrew is the most cited computer vision researcher and the only person whose Zisserman number is zero. :) His course was on Visual Search and Recognition, including instance-level and category-level recognition.
The ideas that were relatively new to me:

• When computing visual words, it is sometimes fruitful to use soft assignments to clusters, or more advanced methods like locality-constrained linear coding [Wang et al., 2010].
• For instance-level recognition it is possible to use query expansion to overcome occlusions [Chum et al., 2007]: the idea is to use the best-matched images from the database as new queries.
• Object detection is traditionally done with a sliding window; the problems here are varying aspect ratios, partial occlusions, multiple responses, and background clutter for substantially non-convex objects.
• For object detection, use bootstrapped sequential classification: at the next stage, take the false positive detections from the previous stage as negative examples and retrain the classifier.
• Multiple kernel learning [Gehler and Nowozin, 2009] is a hot tool for finding the ideal linear combination of SVM kernels: combining different features is fruitful, but learning the combination is not much better than just averaging (Lampert: "Never use MKL without comparison to simple baselines!").
• Movies are common datasets, since they contain a lot of repeated objects/people/environments, and the privacy issues are easy to overcome. Movies like Groundhog Day and Run Lola Run are especially good since they contain repeated episodes. You can try to find the clocks on the Video Google Demo.

Zisserman talked about the PASCAL challenge a lot. During a break he mentioned that he annotated some of the images himself, since "it is fun". One problem with the challenge is that we don't know whether the progress over the years really reflects better methods or just the growth of the training set (though it is easy to check). Andrew Fitzgibbon gave two more great lectures, one about Kinect (with slightly different motivation than in Cambridge) and another about continuous optimization.
He talked a lot about reconciling theory and practice:

• The life-cycle of a research project is: 1) chase the high-hanging fruit (a theoretically sound model), 2) try to make the stuff really work, 3) look for the things that confuse/annoy you and fix them.
• For Kinect pose estimation, the good top-down method based on tracking did not work, so they ended up classifying body parts discriminatively; temporal smoothing is applied only at a late stage.
• "Don't be obsessed with theoretical guarantees: they are either weak or trivial."
• On the simplest optimization method: "How many people have invented [coordinate] alternation at some point of their life?" Indeed, the method is guaranteed to converge, but problems arise when the valleys are not axis-aligned.
• Gradient descent is not a panacea: in some cases it also takes tiny steps; the conjugate gradient method is better (and it uses only 1st-order derivatives).
• When possible, use second derivatives to determine the step size, although estimating them is hard in general.
• One almost never needs to take a matrix inverse; in MATLAB, to solve the system Hd = −g, use the backslash operator: d = −H\g.
• The Friday-evening method is to just try MATLAB's built-in derivative-free Nelder–Mead method.

Dr. Fitzgibbon asked the audience what the first rule of machine learning is. I could hardly help replying "Never talk about machine learning", but he expected a different answer: "Always try the nearest neighbour first!"

Christoph Lampert gave lectures on kernel methods, on structured learning, and on kernel methods for structured learning.
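Fitzgibbon's matrix-inverse tip carries over beyond MATLAB; here is a quick sketch in Python/NumPy (the 2×2 system is made up purely for illustration):

```python
import numpy as np

# A toy symmetric positive-definite "Hessian" H and gradient g.
H = np.array([[4.0, 1.0],
              [1.0, 3.0]])
g = np.array([1.0, 2.0])

# Wasteful and less numerically stable: form the inverse explicitly.
d_bad = -np.linalg.inv(H) @ g

# Better: solve H d = -g directly, the analogue of MATLAB's d = -H\g.
d = np.linalg.solve(H, -g)

assert np.allclose(d, d_bad)
```

For large sparse Hessians the same advice applies with a sparse solver or conjugate gradients instead of a dense solve.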
Some notes on the kernel methods talk:

• (Obvious) don't rely on the error on the training set, and (less obvious) don't even report it in your papers.
• To be legitimate, an SVM kernel should be an inner product; it is often hard to prove this directly, but there are workarounds: a kernel can be derived from a conditionally positive-definite matrix; sums, products and exponents of kernels are kernels too, etc. (importantly for multiple kernel learning, a linear combination of kernels is a kernel).
• Since training (and running) non-linear SVMs is computationally hard, explicit feature maps are popular now: try to decompose the kernel back into a conventional dot product of modified features; typically the features have to be transformed into infinite sums, so one takes the first few terms.
• If the kernel can be expressed as a sum over vector components (e.g. the χ2 kernel $\sum_d x_d x'_d / (x_d + x'_d)$), it is easy to decompose; the radial basis function (RBF) kernel ($\exp(-\|x-x'\|^2 / 2\sigma^2)$) is the exponent of a sum, so it is hardly decomposable (stricter conditions are in the paper).
• When using an RBF kernel, you have another parameter σ to tune; the rule of thumb is to set σ² to the median distance between training vectors (thus cross-validation becomes one-dimensional).

Christoph also told a motivating story about why one should always use cross-validation (so just forget the previous point :). Sebastian Nowozin was working on his [ICCV 2007] paper on action classification. He used the method of Dollár et al. [2005] as a baseline. The paper reported 80.6% accuracy on the KTH dataset. He outperformed the method by a couple of percent and then decided to reproduce Dollár's results. Imagine his wonder when simple cross-validation (with the same features and kernels) yielded 85.2%! So Sebastian had to improve his method further to beat the baseline.

I feel I should stop writing about the talks now, since this post is growing enormously long.
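Lampert's σ rule of thumb is a two-liner to implement. A sketch in Python with toy data (I use the common variant that sets σ itself to the median pairwise distance; the talk's σ² variant differs only in where the median goes):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))  # toy "training vectors"

# All pairwise Euclidean distances between distinct training points.
diffs = X[:, None, :] - X[None, :, :]
dists = np.sqrt((diffs ** 2).sum(axis=-1))
upper = np.triu_indices(len(X), k=1)

# Median heuristic: fix sigma from the data, so the only remaining
# SVM hyper-parameter to cross-validate is C.
sigma = np.median(dists[upper])

def rbf_kernel(x, y, sigma=sigma):
    """RBF kernel exp(-||x - y||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))
```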
Another of Lampert's lectures, and Carsten Rother's course on CRFs, were close to my topic, so they deserve separate posts (I have already reviewed the basics of structured learning and max-product optimization in this blog). Andreas Müller blogged about Ivan Laptev's recent action recognition talk at CVML, which was pretty similar to ours. The slides are available for all MSCVS talks, and videos will be shared in September.

There were also several practical sessions, but I personally consider them not that useful, because one can hardly grasp the essence of a method in 1.5 hours of changing code according to some verbose instructions. It is more of an art to design such tutorials, and no one has really mastered it. :) Even if the task is well designed, one may fail to complete it for technical reasons: during Carsten Rother's tutorial, Tanya and I spent half an hour hunting a bug caused by confusing input and index variable names (MATLAB is still dynamically typed). Ondrej Chum once mentioned how his tutorial was doomed because half of the students did not know how to work with sparse matrices. So, practical sessions are hard to get right.

There was also a poster session, but unfortunately I cannot remember many bright works. Nataliya Shapovalova, who won the best poster award, presented quite interesting work on action recognition, which I liked as well (and it is not last-name bias! :) My congratulations to Natasha!

The planned social events were not as extensive as in Cambridge, but self-organization worked out. The most prominent example was our overnight walk around Moscow, in which a substantial number of school participants took part. It included catching the last subway train, drinking whiskey and gin, a game of guessing invented names for each other, and moving a car off the tram rails in the morning to let the tram pass. :) I also met some of the OpenCV developers from Nizhny Novgorod there. MSCVS is a one-time event, unfortunately.
There are at least three annual computer vision summer schools in Europe: ICVSS (the most mature one; I attended it last year), CVML (held in France by INRIA) and VSSS (which includes sport sessions besides the lectures; held in Zürich). If you are a PhD student in vision (especially at the beginning of your program), it is worth attending one of them each year to keep up with current trends in the vision community, especially if you don't go to the major conferences. The sets of topics (and even speakers!) usually have a large intersection, so pick just one of them. ICVSS has arguably the most competitive participant selection, but its application deadline and acceptance notification are in March, so one can apply to the other schools if rejected.

Google search by image

Last week Google introduced the Search by Image feature. There were a handful of web sites that offered content-based image retrieval on the Internet, but the quality was low, as I blogged earlier. I repeated the queries TinEye had failed at, and Google image search found them both! It found a few instances of Marion Lenbach (one of them from this blog, which means the coverage is large!), and I finally remembered the movie from the HOG paper: The Talented Mr. Ripley. So, at first glance, Google has finally accomplished what the others could not do for ages. Why did they succeed? There are two possible reasons: large facilities that allow building and storing a large index efficiently, and a unique technology. The former is surely the case: it seems the engine has indexed a large portion of the photos on the web. I cannot say anything about the technology: there is nothing about it among Google's CVPR papers, so one would need to do black-box testing to see which transformations and modifications are tolerated. Google seems to be expanding into the areas of the multimedia web, even where the niche is already occupied.
Recently they announced their alternative to Grooveshark: the recommendation system for Music Beta (the service is unfortunately available by invitation only, and only in the US). The system is not based (only) on collaborative filtering: they (also) analyse the content. I planned to investigate this area too, but it is becoming mature and thus not so alluring. I am eager to see whether the service will succeed. After all, Buzz did not replace Twitter, and Orkut did not replace Facebook.

On structured learning

This post is devoted to structured learning, a novel machine learning paradigm which may be used for solving various computer vision problems. For example, finding optimal parameters of graphical models is essentially a structured learning problem. I am going to give an introductory talk on structured learning for Dmitry Vetrov's graphical models class this Friday at 4:20 pm, so please feel free to drop by if you are in Moscow (the talk will be in Russian).

Structured learning is basically a very general supervised learning setting. Consider classification as a basic problem. One needs to find the parameters of a function that maps a feature vector to one of the pre-defined class labels: $\mathbb{R}^m \to \{c_1, c_2, \dots, c_K\}$. The fundamental property is that the classes are unordered and orthogonal. The latter means the probabilities of an object being assigned different labels are uncorrelated (some negative correlation is normal, since confidently classifying an object as $c_i$ decreases the probability of the rest of the labels). Now consider regression, another supervised learning problem, where feature vectors map to real values: $\mathbb{R}^m \to \mathbb{R}$. It might be considered a classification problem with an infinite number of classes. There are two obvious flaws in this reduction. First, a training set is unlikely to have at least one example for each class.
To overcome this, one can quantize the codomain and train a classifier over a finite set of class labels. However, that leaves us with the second flaw: the method does not take into account the correlation between the possible outcomes. The bins are ordered: training features that correspond to neighbouring bins should be handled differently from those that correspond to distant bins. That's why global methods (like linear regression) are used instead.

The situation is similar in the case of a structured outcome. Here one usually has plenty of possible outcomes, but they are not independent. The techniques of structured learning are applicable when the outcomes have some underlying structure and the elements of the structure have similar sets of features. Also, it should be possible to estimate how bad a prediction is (often in terms of incorrectly predicted elements), which is called the structured loss. The methods allow small deviations from the ideal training outcome, which are possible e.g. because of noise, but penalize substantial ones (just like regression!). The prediction function (whose parameters are tuned) can thus be formalized as a mapping $\mathbb{R}^{m \times l} \to \{c_1, c_2, \dots, c_K\}^l$. A possible example is hidden Markov model learning, where the elements are emission potentials and transition potentials, and the loss can be defined as the number of wrong HMM outputs. In the formalization above, there are $l$ elements, each represented by $m$ features (in practice, $m$ can vary for different types of elements). Since the labels of transition elements are strictly determined by the emission outputs, not every outcome is possible. Another example of a structured prediction problem is natural language parsing: given a sentence of a natural language, the corresponding parse tree is to be constructed. Surprisingly, parse trees can also be represented as high-dimensional vectors with constraints applied.
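To make this concrete (the notation below is mine, a sketch of the standard formulation rather than anything specific from the post): prediction is usually an argmax of a linear scoring function over joint features $\phi(x, y)$,

$$\hat{y}(x) = \operatorname*{arg\,max}_{y} \; w^\top \phi(x, y),$$

and the parameters $w$ are typically found with a margin-rescaled (structural SVM) objective,

$$\min_{w,\,\xi \ge 0} \; \frac{1}{2}\|w\|^2 + C \sum_i \xi_i \quad \text{s.t.} \quad w^\top \phi(x_i, y_i) - w^\top \phi(x_i, y) \ge \Delta(y_i, y) - \xi_i \quad \forall i,\; \forall y \ne y_i,$$

where $\Delta(y_i, y)$ is the structured loss measuring how far an outcome $y$ deviates from the ground truth $y_i$.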
To summarize, structured prediction has an outcome that is multivariate, correlated and constrained. Ideally, the parameters of structured prediction should be tuned via likelihood maximization, but this turns out to be intractable due to the need to compute the partition function at each gradient optimization step. That's why the L2-regularized hinge loss is usually minimized instead. The prediction algorithm is represented as a scoring function over possible outcomes. The task of the learning algorithm is to find the parameters of the scoring function that make it return maximum values for the true outcomes on the training set, and low values for the instances that are far from optimal. Given that, the margin between good and bad possible outcomes should be maximized (in terms of the scoring function's value). You can learn more about structured learning from Ben Taskar's NIPS 2007 tutorial. See also our recent 3DIMPVT paper for an up-to-date survey of MRF/CRF training methods and their applications.

PS. I've installed the MathJax equation engine on the blog. Seems to work, huh?

[CFP] GraphiCon-2011 and MSCVSS

Our lab traditionally organizes GraphiCon, the major (and maybe the only) conference in Russia specializing in computer graphics and computer vision. The conference is not very selective, but it still has a decent community of professionals behind it. So, if you want to get quality feedback on your preliminary work, or have always wanted to visit Russia, consider submitting a paper by May 16. A special offer for young readers of my blog: if this is the place where you learned about the conference and decided to submit a paper, I'll buy you a beer during the conference. :) Consider it another social micro-event.

Another piece of news is primarily for undergrads and PhD students from Russia. Our lab and Microsoft Research are organizing another event this summer: a computer vision summer school. The lecturers include Christoph Lampert and Andrew Zisserman, among others.
Participation is free of charge, and accommodation is provided. The deadline is April 30. You should not miss it!

IEEE goes evil?

In November, the IEEE released a new copyright agreement that is to be signed by the authors of all papers published by the organization since January 2011. The main novelty is that the authors are no longer allowed to publish the final versions of their papers on-line. Fortunately, we may still post the versions accepted for print (with post-review corrections), which is still great, indeed. You can find the FAQ concerning the new policy here. It seems the IEEE realizes that nowadays there is almost no need for their printed materials and digital library, as search engines manage to find papers posted by their authors. No doubt they do not want to restrict the dissemination of scientific results, but it is a question of their survival as a major publishing organization. The worst part is that they use the drug-dealer strategy: first they organize great conferences and journals (the majority of top venues in my field are run by the IEEE) and allow authors to re-publish anything on-line ("the first hit for free"), then they cut it off. Yes, one may still post an accepted version, but who knows what their next step will be? The IEEE motivates the decision by the need to preserve the value of their database: "the IEEE is better able to track usage of articles for the benefit of authors and journals". One can object that there are big and growing open-access databases like Mendeley, where all the statistics are open and refined (the personal info of users is known). Also, not all institutions have access to the library. For example, my university (the largest one in Russia) does not. And I cannot afford to pay $25 per paper.

What can we do about it? Sure, it is impossible to stop publishing in IEEE venues, although the community can find a way around, as was demonstrated by the founders of JMLR, now the major journal in machine learning, which was created to remove the overhead of the commercial publishing model. Matt Blaze encourages researchers not to serve as reviewers for the IEEE. As for me, I'm too young (as a researcher) to be a reviewer. But the policy affects me too: I am now preparing a final version for proceedings published by the IEEE, and I doubt whether there is any use in improving my paper (even given a quality review), assuming that the majority of researchers will use the accepted version and never see the revised one.

UPD (March 14, 2011). It seems I misunderstood the new policy at first; Blaze's post and the IEEE terminology somehow misled me. The "accepted" paper means the one accepted for print by the IEEE, not the one submitted for review. So you can still use the review results to correct the paper and post it on-line, then submit it for publishing, where probably only the formatting will be adjusted. Thus, any meaningful change can be reflected in the on-line version. I have corrected the post accordingly, so it may now read a bit oddly.

About a month ago I visited CERN, arguably the most famous research organization in Europe. It is the place where the World Wide Web was invented, and the home of the Large Hadron Collider. I was very excited about the trip, much like Sheldon Cooper. Here are some facts that were new to me:

• CERN was founded after WWII by a group of European countries. Exhausted by the war, they could only catch up with the USA and the USSR in fundamental science by joining forces.
• The original name was Conseil Européen pour la Recherche Nucléaire (European Council for Nuclear Research), abbreviated CERN. Later the name was officially changed to the European Laboratory for Particle Physics, which is both more accurate and less frightening for the locals; however, the brand CERN is now used even in official documents.
• There are almost 3,000 full-time employees, but most of them are engineers and not scientists. There are a lot of visiting researchers though.
• There are 20 member states now (primarily EU states), and 6 observer states (such as Russia and the USA).
• CERN's annual budget is about €1 billion; it is funded by the member states in proportion to their economic power, e.g. Germany provides 20% of the money.
• The budget money is spent on infrastructure and support; all the individual experiments are funded by research groups and their universities.
• Although the USA is not a member state, it leads in the number of researchers working on CERN projects (more than a thousand); Germany is second and Russia third (yes, we are still in good shape in particle physics). It turned out that everybody at CERN spoke Russian, even the janitor. :)

• The LHC is in fact a circular tunnel, 27 km in circumference, lying 175 m underground.
• The tunnel predates the LHC: it was built in 1983 for the Large Electron-Positron Collider. In 2008 the machine was upgraded to accelerate heavy particles like protons, becoming the Large Hadron Collider (recall that protons and neutrons are nearly two thousand times heavier than electrons).
• The tunnel is about 4 meters in diameter; one can walk through it or ride a bike.
• The tunnel encapsulates two small pipes for the particles, which intersect at four points (to make the tracks' lengths equal, like in speed skating arenas). There are more than a thousand dipole electromagnets along the pipes. They are not as big as I had imagined.
• Inside the tubes there is a near-vacuum at a temperature close to absolute zero.
• Proton beams are not generated inside the LHC. First they are accelerated in a linear accelerator, where they almost reach the speed of light, c. As the speed rises, it becomes harder and harder to increase it further, since it cannot exceed the speed of light. The protons are then accelerated in a smaller circular accelerator, and only after that are they injected into the LHC, where over about 40 minutes the beams are accelerated to a speed as close to c as possible.
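To get a feeling for "as close to c as possible", here is a rough back-of-the-envelope computation (the numbers are my additions, not from the tour: the LHC design beam energy is about 7 TeV per proton, and the proton rest energy is about 0.938 GeV):

```python
import math

beam_energy_gev = 7000.0   # design beam energy, ~7 TeV per proton
rest_energy_gev = 0.938    # proton rest energy, ~0.938 GeV

gamma = beam_energy_gev / rest_energy_gev   # Lorentz factor, ~7460
beta = math.sqrt(1.0 - 1.0 / gamma**2)      # v/c

print(gamma)       # ~7463
print(1.0 - beta)  # ~9e-9: only a few metres per second short of c
```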
• Once accelerated, the beams are kept circulating and observed for about 10 hours. There are about 10,000 collision events per second, with about 20 pairs of protons colliding each time. Since the speed is so high, the energy of these collisions is enormous.
• The collisions take place at special locations called detectors. We visited the control centre of one of them, ATLAS. A detector has multiple layers, each able to register a certain kind of particle, such as photons.
• Ten thousand collisions per second would yield a really big amount of data, so only a few of them are selected to be logged. I don't know how they select those collisions; machine learning might be used. :)
• All the collected data are spread across servers all over the world. An authorized researcher may log in to the grid network and execute her script to analyse the data.
• The Higgs boson is a hypothetical particle whose existence would confirm the Standard Model of particle physics.
• A Higgs boson can appear as a result of a collision of two protons carrying a large amount of energy. The protons in the LHC are accelerated enough to produce theoretically sufficient energy.
• The Higgs boson is very heavy and thus unstable. In theory, it decays into other particles, e.g. four muons or two photons. So, if the detector registers two counter-directed light beams, this will be evidence of the boson. See the picture below for an example of the likely detector output in case the boson shows up.
• Scientists say that if the Higgs boson is not detected, all of modern knowledge on particle physics will crumble, and they will be obliged to develop a new theory from scratch.
• There are no published results reporting a Higgs boson detection so far...

Don't you want to be a theoretical physicist now? =)

Art + Multimedia

In December we attended a lecture about the interaction between arts and sciences, namely between (primarily) visual arts and (again, primarily) multimedia studies (Russian page). The lecture was given by Asya Ablogina, who turned out to be a nice girl. Although she lacked any technical background, she did well. Surprisingly, there is a lot of artwork that exploits technology in a witty manner, which I was unaware of, so the lousy stuff exhibited in the Moscow Museum of Modern Art is not everything one can do in this field!

The most interesting part was Asya's exposition of some masterpieces. Most of them were presented during the 2009 Science as Suspense exhibit in Moscow (Russian). Nicolas Reeves is a famous Canadian architect, also known for his work on modelling biological systems. He used the output of biologically inspired computer algorithms (such as genetic algorithms) to draw pictures. In Moscow he presented a project called Marching Floating Cubes: massive but light cubes float in the air. Their movements are controlled by tiny fans, although any little gust of wind can affect them. Each cube is equipped with an on-board computer, which helps it avoid collisions. The implemented algorithms are simple but stochastic, hence the behaviour is unpredictable. The cubes are said to move like animals. Here is the video:

A similar project was developed by Paul Granjon. He also tries to endow robots with animal behaviour. In the video below, the robots are sexed, i.e. they are able to locate robots of the opposite sex and eventually end up in coitus. Another of Paul's robots (the Smartbot, photo) creeps around a restricted space and grumbles constantly (just like Marvin!). I also enjoy the way Paul speaks:

The real idol of the sci-art community is Stelarc (see also his homepage, although it takes balls to get through the welcome page :). His talent is recognized by both scientists and artists (it is enough to mention that he is an Honorary Professor of Arts and Robotics at Carnegie Mellon). He is best known for experiments on his own body. For example, during one performance he allowed his body to be controlled remotely over the Internet by muscle stimulation. Probably his favourite project is the Prosthetic Head, which is in fact a 3D model of his own head. The head is trained on Stelarc's behaviour and speech, so it is able to communicate with people using colloquial language as well as non-verbal cues. It would be interesting to try it in practice, but here is just a non-interactive demo video:

Stelarc is probably the only person on Earth who has three ears: in 2007 surgeons implanted an ear into his arm! It is not functioning now (it's just a piece of skin), but he plans to install an audio receiver into the ear and broadcast everything he hears over the Internet. I definitely recommend looking through his projects; there are some decent ones.

Asya also presented her own works. She focuses on different photographic techniques, such as overexposure. Here are photos from her project Canvas of the Road, compiled into a video. There is a play on words in Russian: a single word means both canvas and road surface. In these photos, car traces look like a painter's strokes:

Another of Asya's projects is a short film series called Habitat. She showed only one episode, titled Habitat: MSU Main Building. The film was about a girl who is a PhD student at Moscow State University and described her place: a standard 8 sq.m. dorm room. The narrative was full of expressive epithets and metaphors about that awful room: the girl felt like she was in a cage, she also didn't like the shared bathroom, etc. It's funny, because I live in a similar single PhD-student room, and I'm quite happy with it, since I had lived in a shared room for five years before. :)

The second lecturer, Vladimir Vishnyakov, presented his project called The Museum of Revived Photography. He used photos from the XIX century to create image-based animation. The idea is technically simple: first segment the people from the background, then use inpainting techniques to restore the background behind them, and animate the movements. Still, Vladimir referred to the programmer he works with as a real genius. In fact, all the stuff I post here ranges from easy to moderately complex (except for Stelarc's projects), so you can do something similar too. The main problem is to come up with the concept. Come on then! ;)