Learn how to create a React Native Expo application with a Raspberry Pi and MQTT! We will use a camera attached to the Raspberry Pi to send images to your phone (or a simulated phone) in real time. This approach can be extended to many other full-stack applications.
Before we begin, watch the prerequisite videos.
Before reading the remainder, be sure to subscribe and support the channel if you have not already!
An interactive version of this video, along with downloadable instructions, is available on Razzl on the App Store and Google Play Store; it contains the code and PDF instructions. Cheers!
The code is also available on YouTube by signing up for Level 1 of my channel.
The physical setup is simple: just the camera attached to the Raspberry Pi.
1-) Raspberry Pi Code
There are two scripts you will need on the Pi (both shown in the video).
MQTT Subscriber (mqtt_subscriber.py)
This script subscribes to an MQTT channel, listening for commands to capture images. It continuously runs on the Raspberry Pi.
Connects to the local MQTT broker and subscribes to "your/command/channel" for capture commands.
Upon receiving the "Yes" command, it triggers the capture function from an external module (s3_example).
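As a rough sketch of the handler logic (function names are my own; in the actual script this callback is registered on a paho-mqtt client subscribed to "your/command/channel"):

```python
def make_on_message(capture_fn):
    """Build an MQTT on_message callback that runs capture_fn on a "Yes" command.

    Sketch only: capture_fn stands in for the capture function imported
    from the s3_example module.
    """
    def on_message(client, userdata, msg):
        # MQTT payloads arrive as bytes; decode and normalize before comparing.
        command = msg.payload.decode("utf-8").strip()
        if command == "Yes":
            capture_fn()

    return on_message
```

With paho-mqtt, this callback would be assigned to `client.on_message` before calling `client.subscribe("your/command/channel")` and `client.loop_forever()`.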
Image Capture and AWS S3 Upload (s3_example.py)
This module captures images using PiCamera and uploads them to an AWS S3 bucket.
capture_image: Captures a still image.
capture_video: Captures a video (commented out).
upload_to_s3: Uploads the captured file to an AWS S3 bucket.
Main capture Function:
Captures an image, uploads it to S3, and publishes the S3 object key to the "your/result/channel" MQTT channel.
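The capture flow described above can be sketched like this (the key-naming scheme and the callable parameters are assumptions for illustration; the real module uses PiCamera, boto3, and an MQTT publish to "your/result/channel"):

```python
import datetime


def make_object_key(prefix="captures"):
    """Build a timestamped S3 object key for a capture (hypothetical naming scheme)."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    return f"{prefix}/image-{stamp}.jpg"


def capture(capture_image, upload_to_s3, publish_result):
    """Glue the three steps together.

    Each argument is a callable standing in for the real functions:
    capture_image (PiCamera still capture), upload_to_s3 (boto3 upload),
    and publish_result (MQTT publish of the key to "your/result/channel").
    """
    local_path = capture_image()          # save a still image locally
    key = make_object_key()               # choose a unique S3 object key
    upload_to_s3(local_path, key)         # push the file to the bucket
    publish_result(key)                   # tell subscribers where to find it
    return key
```

Publishing only the object key (rather than the image bytes) keeps the MQTT message small; the backend fetches the actual image from S3.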
Run mqtt_subscriber.py on the Raspberry Pi to listen for capture commands.
Image Capture and S3:
Ensure required libraries are installed (pip install picamera boto3).
Use the MQTT Publisher code to send "Yes" to "your/command/channel", triggering image capture.
Raspberry Pi captures and uploads an image, publishing the S3 object key to "your/result/channel."
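If the Mosquitto command-line clients are installed (see Part 1), a quick way to send the trigger without a publisher script is the following one-liner (broker address assumed to be localhost):

```shell
# Publish the "Yes" command to the channel the Pi subscribes to
mosquitto_pub -h localhost -t "your/command/channel" -m "Yes"
```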
Make sure to watch Part 1 to learn how to set up Mosquitto on the Raspberry Pi.
Replace placeholders with actual MQTT broker, S3 bucket, and AWS credentials.
Adjust desktop storage path (desktop_path) as needed.
Video capture is included but commented out for optional use.
If "Something Happened!" is printed, investigate the error and check the logs for details.
This setup enables users to trigger image capture on the Raspberry Pi through MQTT commands, seamlessly integrating with AWS S3 for storage. Ensure proper configuration for successful execution.
2-) Backend Node.js Code
This code is an example of a Node.js application using the Express framework to create a simple server. The server exposes an endpoint (/takePhoto) that, when accessed, triggers a series of actions:
Write to File System:
The code creates a write stream (fileWriteStream) to save an image file locally (image.jpg).
MQTT Communication:
Connects to an MQTT broker at the specified address (mqttBrokerAddress).
Subscribes to a channel (mqttSubscribeChannel) and publishes a message ('Yes') to another channel (mqttPublishChannel).
Handling MQTT Responses:
Listens for a response on the subscribed channel (mqttSubscribeChannel).
When a response is received, it unsubscribes from the channel.
Retrieves an object from an AWS S3 bucket (raspberrypi-app) using the received key.
Writes the data from the S3 object to the local file system (image.jpg).
Sending a JSON Response:
Converts the image data to Base64.
Sends a JSON response to the client with the Base64-encoded image.
Handles errors during the process, logging them to the console and sending appropriate HTTP status codes and error messages.
The Express app starts listening on port 3000.
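The S3-object-to-JSON step can be sketched as follows (the helper name is my own; in the real code this happens inline inside the /takePhoto route handler):

```javascript
// Sketch: turn the image bytes fetched from S3 into the JSON payload
// that the /takePhoto endpoint returns (buildPhotoResponse is a
// hypothetical helper name).
function buildPhotoResponse(imageBuffer) {
  // Node's Buffer encodes the raw bytes as a Base64 string for transport.
  const base64Image = imageBuffer.toString('base64');
  return { image: base64Image };
}

// Inside the Express route handler, roughly:
//   res.json(buildPhotoResponse(s3ObjectBody));
```

Base64 inflates the payload by about a third, but it lets the image travel inside a plain JSON response that the React Native client can consume directly.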
Important Points:
Port Configuration: You can change the port variable to the desired port for your server.
MQTT Configuration: Replace the values of mqttBrokerAddress, mqttPublishChannel, and mqttSubscribeChannel with your MQTT broker's address and desired channels.
AWS S3 Configuration: The AWS S3 client is configured with access credentials. Ensure you replace the dummy credentials (accessKeyId and secretAccessKey) with your AWS IAM user's credentials. (ALWAYS KEEP THIS INFORMATION PRIVATE)
The server exposes a single endpoint (/takePhoto) for taking a photo.
The response contains a JSON object with the Base64-encoded image which is processed by the React Native Frontend.
This is an extension of the code shown in the Part 2 tutorial. All you need is a Node backend running; you can just copy the index file from this project's files to get it going. There are many more details in the video if you are confused.
3-) Frontend React Native Code
This React Native Expo frontend seamlessly interacts with a Node.js backend to capture and display photos. Key points include:
Triggers the backend's takePhoto function by sending a GET request to /takePhoto using Axios.
Communicates with the backend server to retrieve a Base64-encoded image.
Displays an activity indicator during the photo capture process to provide user feedback.
Renders the received image data in the UI using the Image component.
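On the client side, the Base64 string from the response has to be wrapped into a source object that the Image component understands. A minimal sketch (the helper name is my own, and the data: prefix assumes a JPEG, matching the Pi capture):

```javascript
// Sketch: convert the Base64 string from the backend's JSON response
// into a source object for React Native's <Image> component.
function toImageSource(base64Image) {
  return { uri: `data:image/jpeg;base64,${base64Image}` };
}

// Usage in the component, roughly (host and response shape assumed):
//   const { data } = await axios.get('http://<backend-host>:3000/takePhoto');
//   setImageSource(toImageSource(data.image));
```

The data URI approach avoids writing the image to device storage; the component can render it straight from memory.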
Once again, this is a very simple Expo project, as discussed in the video. Be sure to have Expo installed on your PC and to create an Expo project with 'expo init' before copying the code from the App.js file! Once that is done, you should be able to run it after installing the packages with npm. I simulate it on my laptop, as shown in the screenshot!
You can extend this basic application further by adding features and configuring the styling as you like. Make sure you subscribe to the channel if you enjoyed the tutorial, and let me know if you have any questions.