Game analysis is an important task in sports, especially in team sports. The objective is to analyze the behavior of opposing teams, in particular their strengths and weaknesses. Currently, this is mainly a manual task, performed by coaches or dedicated game analysts on the basis of videos of the teams to be analyzed.
The goal of the SportSense project is twofold. First, we apply novel algorithms to detect semantic events within continuous streams of data. Wearable sensors are becoming increasingly available in team sports. However, these sensors only produce basic information, such as the position and speed of a player. A major challenge is to derive higher-level semantic events from these data streams. These range from statistics on ball possession to a more sophisticated analysis of the tactical behavior of the entire team.

Second, we apply novel sketch- and content-based approaches to video retrieval to support the tasks of coaches and game analysts. The objective is to facilitate the search for scenes in sports videos. In most cases, queries are formulated on the basis of specific motion characteristics the coach is interested in or remembers from previous viewings of the video. Providing sketching interfaces for graphically specifying query input is thus a very natural form of user interaction for a retrieval application. However, the quality of the query (the sketch) depends heavily on the memory of the user and on her ability to accurately formulate the intended search query by transforming this 3D memory of the known item(s) into a 2D sketch query. Therefore, appropriate user interfaces have to be provided that are easy to use and that allow for the specification of rough sketches. The retrieval back-end then needs to execute these queries while allowing a certain degree of tolerance between the query sketch and the actual motion.
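To illustrate the first goal, the step from raw sensor output to a semantic event such as ball possession can be sketched as follows. This is a minimal, hypothetical example, not the project's actual algorithm: it assumes each frame provides 2D positions for the ball and every player, and credits possession to the nearest player within a distance threshold. All names and the threshold value are illustrative assumptions.

```python
from math import hypot

# Assumed threshold (in metres) within which the nearest player is
# considered to possess the ball; purely illustrative.
POSSESSION_RADIUS = 1.5

def possession_stats(frames):
    """frames: list of dicts {'ball': (x, y), 'players': {pid: (x, y)}}.
    Returns, per player, the fraction of possessed frames they held the ball."""
    counts = {}
    possessed = 0
    for frame in frames:
        bx, by = frame['ball']
        nearest, dist = None, float('inf')
        # Find the player closest to the ball in this frame.
        for pid, (px, py) in frame['players'].items():
            d = hypot(px - bx, py - by)
            if d < dist:
                nearest, dist = pid, d
        # Only count the frame if that player is close enough to the ball.
        if nearest is not None and dist <= POSSESSION_RADIUS:
            counts[nearest] = counts.get(nearest, 0) + 1
            possessed += 1
    return {pid: c / possessed for pid, c in counts.items()} if possessed else {}
```

More sophisticated tactical analyses would build on the same principle: aggregating low-level position and speed streams over time into higher-level events.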
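The tolerance between a query sketch and the actual motion, mentioned for the retrieval back-end, could be realized in many ways. One simple sketch, under the assumption that both the drawn query and the recorded motion path are 2D polylines, is to resample both to the same number of points and accept a match when the average point-wise distance stays below a tolerance. Function names, the tolerance, and the sample count are illustrative assumptions, not the project's actual matching method.

```python
from math import hypot

def resample(path, n):
    """Resample a polyline of (x, y) points to n evenly spaced points."""
    # Cumulative arc length at each vertex.
    lengths = [0.0]
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        lengths.append(lengths[-1] + hypot(x1 - x0, y1 - y0))
    total = lengths[-1]
    out = []
    seg = 0
    for i in range(n):
        target = total * i / (n - 1)
        # Advance to the segment containing the target arc length.
        while seg < len(path) - 2 and lengths[seg + 1] < target:
            seg += 1
        span = lengths[seg + 1] - lengths[seg] or 1.0
        t = (target - lengths[seg]) / span
        (x0, y0), (x1, y1) = path[seg], path[seg + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def matches(sketch, motion, tolerance=1.0, n=32):
    """True if sketch and motion stay within `tolerance` average distance
    after both are resampled to n points."""
    a, b = resample(sketch, n), resample(motion, n)
    avg = sum(hypot(ax - bx, ay - by)
              for (ax, ay), (bx, by) in zip(a, b)) / n
    return avg <= tolerance
```

In this simple scheme, a rough sketch still matches as long as it stays within the tolerance band around the true trajectory, which is exactly the kind of forgiveness a memory-based sketch query needs.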