VIRAT


Concept diagram of the VIRAT system, from the DARPA project solicitation[1]

The Video and Image Retrieval and Analysis Tool (VIRAT) program is a video surveillance project funded by the Information Processing Technology Office of the Defense Advanced Research Projects Agency (DARPA).[2][3][4]

The purpose of the program was to create a database that could store large quantities of video and make it easily searchable by intelligence agents for "video content of interest" (e.g. "find all of the footage where three or more people are standing together in a group"); this is known as "content-based searching".[1]

The other primary purpose was to create software that could provide "alerts" to intelligence operatives during live operations (e.g. "a person just entered the building").[1]
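As a rough illustration only, and not drawn from the solicitation, an alert of this kind could be expressed as a simple rule evaluated over a stream of machine-generated activity events. Every name in the sketch below (ActivityEvent, watch_stream, the activity labels) is hypothetical.

    # Minimal sketch, assuming the system emits per-event activity records for a live feed.
    from dataclasses import dataclass
    from typing import Iterable, Iterator

    @dataclass
    class ActivityEvent:
        timestamp: float        # seconds into the live feed
        activity: str           # e.g. "person_enters_building"
        location: str           # e.g. a building or checkpoint identifier

    def watch_stream(events: Iterable[ActivityEvent],
                     activity_of_interest: str) -> Iterator[ActivityEvent]:
        """Yield an alert for every event matching the activity of interest."""
        for event in events:
            if event.activity == activity_of_interest:
                yield event

    # Example: alert whenever a person enters the monitored building.
    feed = [
        ActivityEvent(12.0, "vehicle_stops", "gate_1"),
        ActivityEvent(14.5, "person_enters_building", "building_A"),
    ]
    for alert in watch_stream(feed, "person_enters_building"):
        print(f"ALERT at t={alert.timestamp}s: {alert.activity} ({alert.location})")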

The focus of VIRAT is primarily on footage from UAVs such as the MQ-1 Predator. As of the writing of the project solicitation in March 2008, most analysis of drone footage was done in a labor-intensive manner by human analysts, who had to "fast-forward" manually through video or run search queries against metadata and annotations added to videos earlier. The goal of VIRAT is to take a large portion of this burden off of humans by automating the analysis of surveillance video.[1]

A major focus of VIRAT is developing the means to search through databases containing thousands of hours of video for footage in which certain types of activities took place, such as:[1]

Diagram of an example operation using the VIRAT system (from the DARPA project solicitation[1])
  • Single person: Digging, loitering, picking up, throwing, exploding/burning, carrying, shooting, launching, walking, limping, running, kicking, smoking, gesturing
  • Person-to-person: Following, meeting, gathering, moving as a group, dispersing, shaking hands, kissing, exchanging objects, kicking, carrying an object together
  • Person-to-vehicle: Driving, getting-in (out), loading (unloading), opening (closing) trunk, crawling under car, breaking window, shooting/launching, exploding/burning, dropping off, picking up
  • Person-to-facility: Entering (exiting), standing, waiting at checkpoint, evading checkpoint, climbing atop, passing through gate, dropping off
  • Vehicle: Accelerating (decelerating), turning, stopping, overtaking/passing, exploding/burning, discharging, shooting, moving together, forming into convoys, maintaining distance
  • Other: VIP activities (convoy, parade, receiving line, troop formation, speaking to crowds), riding/leading animal, bicycling
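As a rough illustration of how a content-based query over such activity annotations might be expressed, the sketch below models an indexed archive and a query like "three or more people gathered in a group". The data model, field names, and activity labels are hypothetical, not part of the VIRAT specification.

    # Minimal sketch, assuming an archive indexed by activity annotations like those listed above.
    from dataclasses import dataclass

    @dataclass
    class Annotation:
        clip_id: str
        start: float       # seconds into the clip
        end: float
        activity: str      # an activity class, e.g. "gathering"
        actor_count: int   # number of people involved

    archive = [
        Annotation("uav_0042", 310.0, 340.0, "gathering", 4),
        Annotation("uav_0042", 512.0, 518.0, "digging", 1),
        Annotation("uav_0107", 75.0, 90.0, "gathering", 2),
    ]

    def query(archive, activity, min_actors=1):
        """Return annotations matching the requested activity class and group size."""
        return [a for a in archive
                if a.activity == activity and a.actor_count >= min_actors]

    # "Three or more people standing together in a group":
    for hit in query(archive, "gathering", min_actors=3):
        print(f"{hit.clip_id}: {hit.start}-{hit.end}s ({hit.actor_count} people)")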

Highly developed object detection systems already exist (e.g. programs that can determine whether an object in video footage is a "car" or a "person wearing a backpack"). VIRAT will use existing object detection technology; it is not within the scope of the program to fund research in object detection, except where it relates to identifying certain types of activities, such as those mentioned above.[1]
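One way to picture this division of labor, purely as an illustrative sketch: the output of an existing object detector (labels and positions per frame) becomes the input to activity-level rules or classifiers. The Detection format, labels, and thresholds below are assumptions made for illustration, not VIRAT's actual interfaces.

    # Minimal sketch: per-frame detector output feeding a simple person-to-vehicle rule.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        frame: int
        label: str    # output of an existing detector, e.g. "person", "car"
        cx: float     # bounding-box centre, image coordinates
        cy: float

    def near(a: Detection, b: Detection, radius: float = 40.0) -> bool:
        """True if two detections in the same frame are within `radius` pixels."""
        return a.frame == b.frame and (a.cx - b.cx) ** 2 + (a.cy - b.cy) ** 2 <= radius ** 2

    def person_near_vehicle_frames(detections):
        """Frames where a person is detected close to a vehicle; a sustained run of
        such frames could be flagged as a candidate person-to-vehicle activity."""
        people = [d for d in detections if d.label == "person"]
        vehicles = [d for d in detections if d.label == "car"]
        return sorted({p.frame for p in people if any(near(p, v) for v in vehicles)})

    detections = [
        Detection(100, "car", 220.0, 180.0),
        Detection(100, "person", 240.0, 195.0),
        Detection(101, "person", 600.0, 400.0),
    ]
    print(person_near_vehicle_frames(detections))   # -> [100]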

The DARPA program manager for the VIRAT project is Dr. Mita Desai.

References

  1. ^ a b c d e f g "BAA-08-20: Video and Image Retrieval and Analysis Tool (VIRAT)". Information Processing Technology Office. March 3, 2008. Retrieved 2012-11-01.
  2. ^ Sanchez, Julian (October 21, 2008). "DARPA building search engine for video surveillance footage". Ars Technica. Retrieved 2009-06-18.
  3. ^ "DARPA Wants VIBRANT Results From VIRAT For UAV Data". SatNews (Industry Publication). September 29, 2008. Retrieved 2009-06-18.
  4. ^ Pincus, Walter (October 20, 2008). "DARPA Contract Description Hints at Advanced Video Spying". The Washington Post. p. A13. Retrieved 2009-06-30.
