OpenVINO


Short description: Toolkit for deploying neural network inference on Intel hardware
OpenVINO
Original author(s): Intel Corporation
Developer(s): Intel Corporation
Initial release: May 16, 2018
Stable release: 2023.3 / January 2024[1]
Repository: github.com/openvinotoolkit/openvino
Written in: C++, Python
Operating system: Cross-platform
License: Apache License 2.0
Website: docs.openvino.ai

OpenVINO toolkit (Open Visual Inference and Neural network Optimization) is a free toolkit for optimizing deep learning models from a training framework and deploying them with an inference engine onto Intel hardware.[2] The toolkit has two versions: the OpenVINO toolkit, which is supported by the open-source community, and the Intel Distribution of OpenVINO toolkit, which is supported by Intel. OpenVINO was developed by Intel. The toolkit is cross-platform and free for use under the Apache License, version 2.0.[3] It enables a write-once, deploy-anywhere approach to deep learning deployment on Intel platforms, including CPUs, integrated GPUs, Intel Movidius VPUs, and FPGAs.

Overview

The high-level pipeline of OpenVINO consists of two parts: generating IR (Intermediate Representation) files from a trained or publicly available model with the Model Optimizer, and executing inference with the Inference Engine on a specified plugin (CPU, Intel Processor Graphics, VPU, GNA, the Multi-Device plugin, or the Heterogeneous plugin).[4]

The toolkit’s Model Optimizer is a cross-platform tool that converts a trained model from its original framework into the OpenVINO format (IR) and optimizes it for later inference on supported devices. As a result, Model Optimizer produces two files, *.bin and *.xml, which contain the weights and the model structure, respectively.
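As an illustration, the conversion step can also be driven from Python. The following is a minimal sketch, not taken from the official documentation; it assumes a local ONNX file named model.onnx and the 2023.x Python API with the developer tools installed (openvino.tools.mo and openvino.runtime):

    # Minimal sketch: convert a trained model to OpenVINO IR (*.xml + *.bin).
    # Assumes an ONNX file named "model.onnx" in the working directory.
    from openvino.tools.mo import convert_model
    from openvino.runtime import serialize

    ov_model = convert_model("model.onnx")   # framework model -> in-memory IR
    serialize(ov_model, "model.xml")         # writes model.xml and model.bin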

The toolkit’s Inference Engine is a C++ library for running inference on input data and retrieving the results. To help developers learn the OpenVINO API, the toolkit ships with many samples that demonstrate how to work with OpenVINO.
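A minimal inference sketch using the Python bindings of the runtime is shown below; the file names, the choice of the CPU plugin, and the 1x3x224x224 input shape are assumptions for illustration:

    # Minimal sketch: load an IR model and run inference on the CPU plugin.
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")          # model.bin is picked up automatically
    compiled = core.compile_model(model, "CPU")   # plugin/device name, e.g. CPU or GPU

    input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)
    results = compiled([input_data])              # maps output nodes to ndarrays
    output = results[compiled.output(0)]
    print(output.shape)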

OpenVINO provides samples of different types: classification, object detection, style transfer, speech recognition, etc. It is also possible to try inference on public models, which cover a variety of tasks, such as:

  • classification
  • segmentation 
  • object detection 
  • face recognition 
  • human pose estimation 
  • monocular depth estimation
  • image inpainting
  • style transfer
  • action recognition
  • colorization

All these models are available for learning purposes or for developing deep learning software. The Open Model Zoo is licensed under the Apache License, version 2.0.

Along with the primary components for model optimization and runtime, the Intel Distribution of OpenVINO toolkit also includes the Deep Learning Workbench, a user-friendly web browser interface that aids model analysis and experimentation; the Post-Training Optimization Tool, which accelerates inference by converting models to low precision without re-training (e.g., post-training quantization); and additional add-ons, such as the Deep Learning Streamer for streaming analytics pipeline interoperability, the OpenVINO Model Server, which enables scalability via a serving microservice, Training Extensions like the Neural Network Compression Framework, and the Computer Vision Annotation Tool, an online interactive video and image annotation tool.
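The post-training, low-precision flow can be sketched with the Neural Network Compression Framework (NNCF) Python API mentioned above rather than the Post-Training Optimization Tool itself; the file names and the random calibration data below are placeholders, and the exact API may differ between releases:

    # Hedged sketch of post-training (no re-training) quantization with NNCF.
    # The calibration data source is a placeholder; a few hundred representative
    # samples are typically used in practice.
    import numpy as np
    import nncf
    from openvino.runtime import Core, serialize

    core = Core()
    model = core.read_model("model.xml")

    calibration_items = [np.random.rand(1, 3, 224, 224).astype(np.float32)
                         for _ in range(300)]
    calibration_dataset = nncf.Dataset(calibration_items)

    quantized_model = nncf.quantize(model, calibration_dataset)  # post-training quantization
    serialize(quantized_model, "model_int8.xml")                 # low-precision IR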

OpenVINO has two webpages: one for documentation and another for downloads.

Supported frameworks and formats

Programming language

OpenVINO is written in C++ and Python.

OS support

OpenVINO runs on the following desktop operating systems: Windows, Linux, and macOS.

OpenVINO also runs on Raspberry Pi.[5]

See also

References

  • Agrawal, Vasu (2019). Ground Up Design of a Multi-modal Object Detection System (PDF) (MSc thesis). Carnegie Mellon University, Pittsburgh, PA. Archived (PDF) from the original on 26 January 2020.
  • Driaba, Alexander; Gordeev, Aleksei; Klyachin, Vladimir (2019). Recognition of Various Objects from a Certain Categorical Set in Real Time Using Deep Convolutional Neural Networks. Institute of Mathematics and Informational Technologies, Volgograd State University. http://ceur-ws.org/Vol-2500/paper_5.pdf. Retrieved 26 January 2020.
  • Nanjappa, Ashwin (31 May 2019). Caffe2 Quick Start Guide: Modular and scalable deep learning made easy. Packt. pp. 91–98. ISBN 978-1789137750.




Licensed under CC BY-SA 3.0 | Source: https://handwiki.org/wiki/Software:OpenVINO