How to integrate Google Vision API in a React Native App

Reading time: 5 minutes

Google Vision API, also known as Cloud Vision API, is a machine learning service that can classify details from images into thousands of categories and detect individual objects within them. It brings a wide variety of computer vision features to an application, such as image labeling; face, landmark, logo, and text detection; optical character recognition (OCR); and many more.

This article aims to show you step-by-step how to integrate Google Cloud Vision API in a React Native application.

Setting up Firebase


First, we will need to set up a new Firebase project. This will save us an enormous amount of time and allow us to focus on our main goal by providing the database and back-end service.

1. Go to Firebase and sign in with your Google account.

2. At the top right corner, click on “Go to console”, then click on the “Create a project” button.

3. Set up Firebase database rules so the app can upload image files. In the menu on the left side, open “Database”, then choose “Rules” and modify them as shown below:

service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      // Open access — fine for development, lock this down before production
      allow read, write;
    }
  }
}

4. To install the Firebase SDK in your React Native app, run:

yarn add firebase

5. You will need the following keys to initialize Firebase in your application. Put them in a secret.js file.

export const FIREBASE_API_KEY = 'XXX'
export const FIREBASE_PROJECT_ID = 'XXX'
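These two keys are the minimum this tutorial relies on, but the web config that the Firebase console generates (under Project settings → Your apps) usually carries a few more fields. A sketch of what a fuller secret.js might look like — the extra field names below are the standard Firebase web-config fields, not something this article strictly requires, and every 'XXX' is a placeholder for your own values:

```javascript
// secret.js — copy the values from Firebase console > Project settings > Your apps
export const FIREBASE_API_KEY = 'XXX';
export const FIREBASE_PROJECT_ID = 'XXX';
// Common additional web-config fields (optional for this tutorial):
export const FIREBASE_AUTH_DOMAIN = 'XXX.firebaseapp.com';
export const FIREBASE_STORAGE_BUCKET = 'XXX.appspot.com';
export const FIREBASE_MESSAGING_SENDER_ID = 'XXX';
```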

6. Create your storage database with default rules, then go to “Rules” and use the following:

service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write: if request.auth == null;
    }
  }
}

Setting up Google Vision API


1. Sign in to the Google Cloud Console with your Google account.

2. To create a project, click on “Select a Project” and then click “New Project”. Choose the name for your project and click “Create”. Back on the main page, select the project you have just created.

3. Move over to “Dashboard” and select “Enable APIs and Services”. Then type “vision” into the search bar, click on “Cloud Vision API”, and lastly, click “Enable”. To complete this process, you will be required to add billing information (don’t worry, you won’t get charged as long as you don’t surpass the free trial limits).

Building the app

1. Now, create a new folder called config, and inside it a new file, firebase.js. Then import the keys from secret.js and initialize Firebase with them.

// firebase.js
import * as firebase from 'firebase';
import { FIREBASE_API_KEY, FIREBASE_PROJECT_ID } from '../secret';
firebase.initializeApp({ apiKey: FIREBASE_API_KEY, projectId: FIREBASE_PROJECT_ID, storageBucket: `${FIREBASE_PROJECT_ID}.appspot.com` });
export default firebase;
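With the storage rules from the previous section in place, uploading a picked image typically means fetching its local URI as a blob and putting it under a storage reference. A minimal sketch of that flow — the uploadImage and imagePath names and the images/ path are illustrative choices, not code from this article; the storageRef argument is expected to be firebase.storage().ref():

```javascript
// Build a unique storage path for an upload (illustrative helper).
function imagePath(timestamp) {
  return `images/${timestamp}.jpg`;
}

// Upload a local image URI to Firebase Storage and return its download URL.
// storageRef should be firebase.storage().ref() from the firebase.js module.
async function uploadImage(storageRef, uri) {
  const response = await fetch(uri);   // read the local file
  const blob = await response.blob();  // convert it to a Blob for upload
  const ref = storageRef.child(imagePath(Date.now()));
  const snapshot = await ref.put(blob);       // upload the blob
  return snapshot.ref.getDownloadURL();       // URL you can hand to Vision API
}
```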

2. Create a new component, Scanner.js, and paste in the code from this GitHub link.

3. This is how billing works: each feature applied to an image is a billable unit. So if, for example, you apply Face Detection and Label Detection to the same image, you are billed for one unit of each. The first 1000 units used each month are free.

If you don’t want to pay for features you don’t need, keep only the features you use and delete the rest (lines 214 to 223).
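This per-feature billing maps directly onto the request body: the Vision API's images:annotate endpoint takes a features array, and each entry in it is one billed unit. A sketch of building and sending such a request — the feature type strings are real Vision API types, but buildVisionRequest and annotateImage are illustrative helpers, not code from the linked Scanner.js:

```javascript
// Build the JSON body for a Cloud Vision images:annotate request.
// Each entry in featureTypes (e.g. 'LABEL_DETECTION', 'FACE_DETECTION',
// 'TEXT_DETECTION') is billed as one unit, so pass only what you need.
function buildVisionRequest(base64Image, featureTypes) {
  return {
    requests: [
      {
        image: { content: base64Image },
        features: featureTypes.map((type) => ({ type, maxResults: 10 })),
      },
    ],
  };
}

// POST the request to the Vision API (apiKey is your Cloud Vision key).
async function annotateImage(apiKey, base64Image, featureTypes) {
  const res = await fetch(
    `https://vision.googleapis.com/v1/images:annotate?key=${apiKey}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(buildVisionRequest(base64Image, featureTypes)),
    }
  );
  return res.json();
}
```

Calling annotateImage(key, img, ['LABEL_DETECTION']) instead of passing every feature type keeps each image at a single billable unit.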

And that’s it! Now you are ready to integrate Google Vision API in a React Native Application. We hope this article can be helpful for you, and if you want to read more about React Native, keep on browsing our blog!