Fix broken links
Roman Donchenko committed Dec 5, 2019
1 parent 44dee24 commit 811bb59
Showing 4 changed files with 5 additions and 5 deletions.
2 changes: 1 addition & 1 deletion demos/README.md
@@ -17,7 +17,7 @@ The Open Model Zoo includes the following demos:
- [Interactive Face Recognition Python* Demo](./python_demos/face_recognition_demo/README.md) - Face Detection coupled with Head-Pose, Facial Landmarks and Face Recognition detectors. Supports video and camera inputs.
- [Mask R-CNN C++ Demo for TensorFlow* Object Detection API](./mask_rcnn_demo/README.md) - Inference of instance segmentation networks created with TensorFlow\* Object Detection API.
- [Multi-Camera Multi-Person Tracking Python* Demo](./python_demos/multi_camera_multi_person_tracking/README.md) Demo application for multiple persons tracking on multiple cameras.
-- [Multi-Channel Face Detection C++ Demo](./multichannel_demo/README.md) - Simultaneous Multi Camera Face Detection demo.
+- [Multi-Channel C++ Demos](./multi_channel/README.md) - Several demo applications for multi-channel scenarios.
- [Object Detection for Faster R-CNN C++ Demo](./object_detection_demo_faster_rcnn/README.md) - Inference of object detection networks like Faster R-CNN (the demo supports only images as inputs).
- [Object Detection for SSD C++ Demo](./object_detection_demo_ssd_async/README.md) - Demo application for SSD-based Object Detection networks, new Async API performance showcase, and simple OpenCV interoperability (supports video and camera inputs).
- [Object Detection for YOLO V3 C++ Demo](./object_detection_demo_yolov3_async/README.md) - Demo application for YOLOV3-based Object Detection networks, new Async API performance showcase, and simple OpenCV interoperability (supports video and camera inputs).
4 changes: 2 additions & 2 deletions demos/multi_channel/README.md
@@ -1,6 +1,6 @@
# Multi-Channel C++ Demos

The demos provide an inference pipeline for three multi-channel scenarios: face detection, human pose estimation and object detection yolov3. For more information, refer to the corresponding pages:
-* [Multi-Channel Face Detection C++ Demo](./face_detection/README.md)
-* [Multi-Channel Human Pose Estimation C++ Demo](./human_pose_estimation/README.md)
+* [Multi-Channel Face Detection C++ Demo](./face_detection_demo/README.md)
+* [Multi-Channel Human Pose Estimation C++ Demo](./human_pose_estimation_demo/README.md)
* [Multi-Channel Object Detection Yolov3 C++ Demo](./object_detection_demo_yolov3/README.md)
2 changes: 1 addition & 1 deletion tools/accuracy_checker/README.md
@@ -45,7 +45,7 @@ In order to evaluate some models required frameworks have to be installed. Accur

- [OpenVINO](https://software.intel.com/en-us/openvino-toolkit/documentation/get-started).
- [Caffe](accuracy_checker/launcher/caffe_installation_readme.md).
-- [MXNet](https://mxnet.incubator.apache.org/versions/master/).
+- [MXNet](https://mxnet.apache.org/).
- [OpenCV DNN](https://docs.opencv.org/4.1.0/d2/de6/tutorial_py_setup_in_ubuntu.html).
- [TensorFlow](https://www.tensorflow.org/).
- [ONNX Runtime](https://github.com/microsoft/onnxruntime/blob/master/README.md).
2 changes: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ For enabling Caffe launcher you need to add `framework: caffe` in launchers sect
* `device` - specifies which device will be used for infer (`cpu`, `gpu_0` and so on).
* `model` - path to prototxt file with Caffe model for your topology.
* `weights` - path to caffemodel file with weights for your topology.
-* `adapter` - approach how raw output will be converted to representation of dataset problem, some adapters can be specific to framework. You can find detailed instruction how to use adapters [here](../adapters/README.md]).
+* `adapter` - approach how raw output will be converted to representation of dataset problem, some adapters can be specific to framework. You can find detailed instruction how to use adapters [here](../adapters/README.md).

You also can specify batch size for your model using `batch` and allow to reshape input layer to data shape, using specific parameter: `allow_reshape_input` (default value is False).
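
As context beyond this diff: a minimal Caffe launcher entry in an Accuracy Checker configuration, built only from the parameters described above, might look like the following sketch. The model name, file paths, and the `classification` adapter are illustrative placeholders, not values taken from this commit.

```yaml
launchers:
  - framework: caffe              # enables the Caffe launcher
    device: cpu                   # or gpu_0, and so on
    model: my_model.prototxt      # placeholder: prototxt with the topology
    weights: my_model.caffemodel  # placeholder: caffemodel with the weights
    adapter: classification       # placeholder: adapter must match the dataset problem
    batch: 1                      # optional batch size
    allow_reshape_input: False    # optional, default is False
```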

