In this guide, we will explore the Document Scanner features of the Dynamsoft Capture Vision SDK.
- Supported React Native Version: 0.71.0 or higher
- Node: 18 or higher

Android:
- Supported OS: Android 5.0 (API Level 21) or higher.
- Supported ABI: armeabi-v7a, arm64-v8a, x86 and x86_64.
- Development Environment: Android Studio 2022.2.1 or higher.

iOS:
- Supported OS: iOS 11+ (iOS 13+ recommended).
- Supported ABI: arm64 and x86_64.
- Development Environment: Xcode 13+ (Xcode 14.1+ recommended).
Run the following command in the root directory of your React Native project to add dynamsoft-capture-vision-react-native to your dependencies:
# using npm
npm install dynamsoft-capture-vision-react-native
# OR using Yarn
yarn add dynamsoft-capture-vision-react-native
Then run the following command to install all dependencies:
# using npm
npm install
# OR using Yarn
yarn install
For iOS, you must install the necessary native frameworks from CocoaPods by running the pod install command as below:
cd ios
pod install
The Dynamsoft Capture Vision SDK needs the camera permission to access the camera device and capture frames from the video stream.
For Android, the camera permission is already declared within the SDK, so you don't need to do anything.
For iOS, you need to include the camera permission in ios/your-project-name/Info.plist inside the <dict> element:
<key>NSCameraUsageDescription</key>
<string>The camera is used to scan documents.</string>
Now that the package is added, it's time to start building the document scanner component using the SDK.
The first step in code configuration is to initialize a valid license via LicenseManager.initLicense.
import {LicenseManager} from 'dynamsoft-capture-vision-react-native';
LicenseManager.initLicense("your-license-key")
.then(()=>{/*Init license successfully.*/})
.catch(error => console.error("Init License failed.", error));
Note:
- The license string here grants a time-limited free trial which requires a network connection to work.
- You can request a 30-day trial license via the Request a Trial License link.
Before opening the camera to start document scanning, you need to request camera permission from the system.
import {CameraEnhancer} from 'dynamsoft-capture-vision-react-native';
CameraEnhancer.requestCameraPermission();
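If you want to wait for the user's decision before mounting the scanner, you can handle the result of the request. The helper below is a minimal sketch, not part of the official sample: the name ensureCameraPermission is made up here, and it assumes requestCameraPermission() returns a Promise (as the other asynchronous calls in this guide do) that rejects when the permission is not granted. Check the API reference for the exact behavior in your SDK version.

import {CameraEnhancer} from 'dynamsoft-capture-vision-react-native';

async function ensureCameraPermission(): Promise<boolean> {
  try {
    // Assumption: the Promise settles once the system permission dialog has been answered.
    await CameraEnhancer.requestCameraPermission();
    return true;
  } catch (error) {
    console.error('Camera permission was not granted.', error);
    return false;
  }
}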
The basic workflow of scanning a document from the video stream is as follows:
- Initialize the CameraEnhancer object
- Initialize the CaptureVisionRouter object
- Bind the CameraEnhancer object to the CaptureVisionRouter object
- Register a CapturedResultReceiver object to listen for scanned documents via the callback function onNormalizedImagesReceived
- Open the camera
- Start document scanning via startCapturing
import React, {useEffect, useRef} from 'react';
import {
  CameraEnhancer,
  CameraView,
  CaptureVisionRouter,
  EnumPresetTemplate,
  NormalizedImagesResult,
  imageDataToBase64
} from 'dynamsoft-capture-vision-react-native';

export function Scanner() {
  const cameraView = useRef<CameraView>(null); // Create a reference to the CameraView component using useRef.
  const camera = CameraEnhancer.getInstance(); // Get the singleton instance of CameraEnhancer.
  const router = CaptureVisionRouter.getInstance(); // Get the singleton instance of CaptureVisionRouter.

  useEffect(() => {
    router.setInput(camera); // Bind the CaptureVisionRouter to the CameraEnhancer before router.startCapturing().
    camera.setCameraView(cameraView.current!); // Bind the CameraEnhancer to the CameraView before camera.open().

    /**
     * Add a CapturedResultReceiver object to listen for the captured results.
     * In this sample, we only listen for onNormalizedImagesReceived, which is generated by the Dynamsoft Document Normalizer module.
     */
    let resultReceiver = router.addResultReceiver({
      // If you start capturing with EnumPresetTemplate.PT_DETECT_AND_NORMALIZE_DOCUMENT,
      // NormalizedImagesResult will be received in this callback.
      onNormalizedImagesReceived: (result: NormalizedImagesResult) => {
        // Handle the `result`.
        if (result.items && result.items.length > 0) {
          let normalizedImageBase64 = imageDataToBase64(result.items[0].imageData);
          //...
        }
      },
    });

    /**
     * Open the camera when the component is mounted.
     * Remember to request camera permission before this call.
     */
    camera.open();

    /**
     * Start capturing when the component is mounted.
     * In this sample, we start capturing with the EnumPresetTemplate.PT_DETECT_AND_NORMALIZE_DOCUMENT template.
     */
    router.startCapturing(EnumPresetTemplate.PT_DETECT_AND_NORMALIZE_DOCUMENT);

    return () => {
      // Remove the receiver when the component is unmounted.
      router.removeResultReceiver(resultReceiver);
      // Close the camera when the component is unmounted.
      camera.close();
      // Stop capturing when the component is unmounted.
      router.stopCapturing();
    };
  }, [camera, router, cameraView]);

  return (
    <CameraView style={{flex: 1}} ref={cameraView}>
      {/* You can add your own views here as children of CameraView. */}
    </CameraView>
  );
}
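To tie the previous steps together, here is one possible way to initialize the license, request camera permission, and then mount the Scanner component. This is a minimal sketch rather than the official sample: the App component, the './Scanner' import path, and the assumption that both initialization calls return Promises are illustrative.

import React, {useEffect, useState} from 'react';
import {CameraEnhancer, LicenseManager} from 'dynamsoft-capture-vision-react-native';
import {Scanner} from './Scanner'; // The Scanner component defined above; the path is an assumption.

export default function App() {
  const [ready, setReady] = useState(false);

  useEffect(() => {
    // Initialize the license first, then request camera permission, then mount the scanner.
    LicenseManager.initLicense('your-license-key')
      .then(() => CameraEnhancer.requestCameraPermission())
      .then(() => setReady(true))
      .catch(error => console.error('Initialization failed.', error));
  }, []);

  return ready ? <Scanner/> : null;
}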
If you want to detect the document boundary and adjust it manually, you can startCapturing with the EnumPresetTemplate.PT_DETECT_DOCUMENT_BOUNDARIES template. The DetectedQuadsResult will then be received through the onDetectedQuadsReceived callback. You can use the Editor component to learn how to draw the DetectedQuadsResult on the original image and interactively edit the quads.
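As a rough sketch of that variant, reusing the router instance from the sample above (DetectedQuadsResult is imported from the same package; how you draw or edit the quads depends on your own UI):

import {DetectedQuadsResult, EnumPresetTemplate} from 'dynamsoft-capture-vision-react-native';

let quadsReceiver = router.addResultReceiver({
  // When capturing with EnumPresetTemplate.PT_DETECT_DOCUMENT_BOUNDARIES,
  // the detected document boundaries are delivered to this callback.
  onDetectedQuadsReceived: (result: DetectedQuadsResult) => {
    if (result.items && result.items.length > 0) {
      // Each item describes the quadrilateral of one detected document boundary.
      // Draw it over the camera preview or pass it to your editor view here.
      //...
    }
  },
});
router.startCapturing(EnumPresetTemplate.PT_DETECT_DOCUMENT_BOUNDARIES);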
The full sample code is available here.
Go to your project folder, open a new terminal and run the following command:
# using npm
npm run android
# OR using Yarn
yarn android
- Open the workspace file *.xcworkspace (not .xcodeproj) from the ios directory in Xcode.
- Adjust Provisioning and Signing settings.
Then run the following command:
# using npm
npm run ios
# OR using Yarn
yarn ios
If everything is set up correctly, you should see your new app running on your device. This is one way to run your app — you can also run it directly from within Android Studio and Xcode respectively.
- How to enable the new architecture in Android
- How to enable the new architecture in iOS
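The links above cover the official steps. For quick reference, the usual React Native switches are shown below; these are standard React Native settings rather than anything specific to this SDK, so double-check them against the React Native documentation for your version.

# Android: set the flag in android/gradle.properties
newArchEnabled=true

# iOS: reinstall the pods with the new architecture flag
cd ios
RCT_NEW_ARCH_ENABLED=1 pod install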
Note:
If you enable the new architecture and build the Android app on Windows, you may encounter build errors due to the Windows Maximum Path Length Limitation. Therefore, we recommend moving the project to a directory with a shorter path before enabling the new architecture.