
Project Highlight: SaveURPlanet Uses APIs to Calculate Carbon Footprint

Climate change is a serious issue on a lot of people’s minds these days. But we can’t all pull a Leonardo DiCaprio and make a record-breaking documentary on the subject. How does an everyday person make a difference?

Enter SaveURPlanet. This app, built in just 24 hours at MLHPrime’s Southwest Regional hackathon, measures a user’s carbon footprint in real time and texts tips on how to bring carbon usage down based on their behavior. The app itself looks something like this:

[Image: the SaveURPlanet app interface]

Today, we’re highlighting how the team built SaveURPlanet using the Clarifai, Microsoft Computer Vision and Twilio APIs. Read on for tutorials on how they did it!


The Mission

Reduce carbon emissions by inspiring people to change their day-to-day behavior. Build an app that automatically tracks a user’s carbon footprint day-to-day and prompts contextual tips to help lower it.

The Method

Overview:

First, the team focused on three areas where people generate the most carbon emissions: travel, food and household. To gather carbon emission data around these points, the team decided to build an app that does the following:

  • Food: Let users take pictures of their food to log its carbon impact
  • Travel: Calculate miles traveled from Uber rides and flights and translate them into emissions
  • Household: Log electricity bills

After this data was collected, the SaveURPlanet project wanted to compare the user’s emissions to that of the average American. The app interface would then allow a user to compare her emissions to the national average and to her past behaviors. Finally, the team wanted the app to send email updates with customized recommendations based on the user’s behavior.
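
To make the comparison concrete, here is a minimal sketch of how a carbon score like this could be computed. Every emission factor, the national-average figure and the function names are illustrative assumptions for the sake of the example, not the team’s actual numbers or code.

```python
# Illustrative sketch of the carbon-score logic described above.
# All constants below are rough, assumed values -- a real app would
# pull them from a vetted emissions dataset.

KG_CO2_PER_CAR_MILE = 0.4       # assumed average gasoline car
KG_CO2_PER_FLIGHT_MILE = 0.2    # assumed per passenger-mile
KG_CO2_PER_KWH = 0.45           # assumed grid-average electricity

US_AVERAGE_KG_PER_DAY = 44.0    # roughly 16 metric tons/year, a commonly cited figure


def daily_footprint(car_miles, flight_miles, kwh, food_kg_co2):
    """Total one day's emissions (kg CO2) across the three tracked categories."""
    travel = car_miles * KG_CO2_PER_CAR_MILE + flight_miles * KG_CO2_PER_FLIGHT_MILE
    household = kwh * KG_CO2_PER_KWH
    return travel + household + food_kg_co2


def vs_national_average(kg_today):
    """Express the user's emissions as a fraction of the assumed US daily average."""
    return kg_today / US_AVERAGE_KG_PER_DAY
```

With something like this in place, the app can show a user that, say, 22 kg of CO2 in a day is half the national average, and track how that ratio changes over time.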

That’s an awful lot of functionality for a 24-hour hackathon! Here’s how they pulled it off.

Drill Down:

One of the reasons we were particularly impressed with this project was its ability to collect and categorize data from different sources. We’ll break it down by the three categories that they got their data from.

Food Pictures and the Clarifai API

SaveURPlanet used the Clarifai API to translate an image of food into a list of ingredients. The Clarifai API has a Food Recognition model that returns 1) a list of ingredients identified in the picture and 2) the probability that each ingredient was identified correctly.

[Image: the Clarifai Food Recognition model in action]

The SaveURPlanet team used RapidAPI’s testing bay to test and connect to the Clarifai API more quickly. Since the team programmed in different languages, the RapidAPI testing bay helped them call the API in their language of choice.

Want to test it out? It’s pretty fun!

Get your favorite food pic ready and take these steps.

Step 1. Get the Clarifai Access Token and Credentials

Clarifai requires an access token before you start making calls to their API. No worries! It’s easy (and more importantly, free) to get one. Here’s how:

  1. Go to Clarifai’s developer page
  2. Sign up for an account
  3. Click the Create Application button (or head to the Developer Dashboard and click “Create a New Application”)
  4. Copy and save your client_id and client_secret
  5. Press the Generate Access Token button

Voila! You should now have your client_id, client_secret and Access Token for the Clarifai API.

Step 2. Call the API from RapidAPI

Next, head over to RapidAPI.com to run the API and start testing images! Here’s an overview of what you’ll need to do.

  1. Visit the Clarifai package page on RapidAPI
  2. Go to the “blocks” category and select the getTags endpoint
  3. Fill the getTags endpoint with relevant data
    • urls: Add the image URL in brackets and quotes ["http://IMAGE.jpg"]
    • model: Type in food-items-v1.0
    • accessToken: Copy the Access Token that you got from Step 1
  4. Log in and select your backend language
  5. Click “Test” to make the call
  6. Check the code to see the list of ingredients
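
The steps above can be sketched in code. Here’s a minimal example of filtering the getTags output into a usable ingredient list. The response shape (parallel `classes` and `probs` lists, following Clarifai’s classic tag format) and the 0.8 confidence threshold are assumptions to adjust against the real response you get back.

```python
# Minimal sketch: turn a Clarifai getTags-style response into an ingredient
# list. The nested shape below is an assumption based on Clarifai's classic
# tag output -- adjust the keys to match the response you actually receive.

def extract_ingredients(response, min_prob=0.8):
    """Keep only the ingredients the model is reasonably confident about."""
    tag = response["results"][0]["result"]["tag"]
    return [cls for cls, p in zip(tag["classes"], tag["probs"]) if p >= min_prob]


sample = {
    "results": [{"result": {"tag": {
        "classes": ["pasta", "tomato", "basil", "fork"],
        "probs": [0.97, 0.91, 0.85, 0.42],
    }}}]
}

print(extract_ingredients(sample))  # → ['pasta', 'tomato', 'basil']
```

Filtering on the returned probabilities is what keeps low-confidence guesses (like the fork above) out of the carbon calculation.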

And there you have it! That’s how SaveURPlanet turned a picture of food into data that a program could actually use.

Bill Reading and the Microsoft Computer Vision API

To get data from household electricity bills and Lyft and Uber receipts, the team used the Microsoft Computer Vision API to extract the text from those documents.

The goal here was to separate the text from the image and input the data into the app. Luckily, the Microsoft Computer Vision API has a specific function for doing just that: an Optical Character Recognition (also known as “OCR”) endpoint. To call this endpoint, all you have to do is register for a Microsoft Developer account, get the subscription key and call the OCR endpoint with RapidAPI.

Here’s how to call the Microsoft Computer Vision OCR endpoint. Try it out for yourself by testing an image with text on it (for example, your favorite image with a motivational quote).

Step 1. Get the Microsoft Computer Vision Subscription Key

To use Microsoft’s Computer Vision API, you’ll need a subscription key. Here’s how to get one:

  1. Go to the Microsoft Cognitive Services developer page
  2. Go to the Microsoft Computer Vision API Services page
  3. Create a Microsoft account or log in
  4. Choose “Computer Vision – Preview” to create a new subscription
  5. In the Key section, choose Key1 or Key2* and press “show” or “copy”

The code you see is the Microsoft Computer Vision subscriptionKey.

*If, when you get to Step 2, you find the subscription key isn’t working, we recommend testing again with the key that you didn’t pick this round (e.g. try Key2 if you were originally testing with Key1).

Step 2. Call the Microsoft Computer Vision API from RapidAPI

This part should be familiar! Head over to RapidAPI.com to start running images with the API.

  1. Go to the Microsoft Computer Vision API package page on RapidAPI
  2. Go to the “blocks” category and select the ocr endpoint
  3. Fill the ocr endpoint with relevant data
    • image: We recommend an image url here (if you need one to test, try this one)
    • language: You can leave this field blank; the API should detect the language in the picture
    • orientation: You can leave this field blank too! The API should detect the image orientation
  4. Log in and select your backend language
  5. Click “Test” to make the call
  6. Check the code to see what the text says!
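
Once the OCR endpoint returns its JSON, the app still has to turn the nested words back into numbers. Here’s a minimal sketch: the regions → lines → words nesting follows Microsoft’s documented OCR output, while the kWh pattern is an illustrative assumption about how an electricity bill might be worded.

```python
import re

# Sketch of pulling a usage figure out of a Microsoft OCR response.
# The regions -> lines -> words structure matches the OCR endpoint's
# documented output; the "kWh" regex is an illustrative assumption.

def ocr_to_text(response):
    """Flatten the OCR JSON into one string of recognized words."""
    words = []
    for region in response.get("regions", []):
        for line in region.get("lines", []):
            words.extend(w["text"] for w in line.get("words", []))
    return " ".join(words)


def find_kwh(text):
    """Grab the first number that appears right before 'kWh', if any."""
    match = re.search(r"(\d+(?:\.\d+)?)\s*kWh", text, re.IGNORECASE)
    return float(match.group(1)) if match else None


sample = {"regions": [{"lines": [
    {"words": [{"text": "Total"}, {"text": "usage:"}]},
    {"words": [{"text": "742"}, {"text": "kWh"}]},
]}]}
```

The same flatten-then-search approach works for ride receipts; only the regular expression changes (e.g. matching a mileage figure instead of kWh).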

By connecting to this API, SaveURPlanet was able to read multiple transportation and household PDFs and images, and aggregate the numbers it needed to calculate a carbon score.

Tying It All Together: Adding the Notification Feature with the SparkPost and Twilio APIs

The app’s final function was to send custom alerts suggesting changes the user could make to reduce their carbon emissions. Tips would be triggered by user behavior. For example, a user who rode multiple Ubers in a week might be prompted to bike more. If nothing the user did prompted a tip, the team would send a generic tip.

To send user tips, the SaveURPlanet team used SparkPost’s API to send users emails and the Twilio API to send text messages. If you want to see how to use the Twilio API, check out this article we wrote about it.
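
As a rough illustration of the trigger-or-generic logic described above, here’s a sketch in Python. The ride threshold, tip wording and function names are assumptions, not the team’s actual rules; the SMS part uses Twilio’s official Python helper library and needs real credentials to run.

```python
# Illustrative sketch of the tip logic plus a Twilio text send.
# Threshold and wording are assumptions made up for this example.

GENERIC_TIP = "Try a meatless Monday this week to cut your food emissions."


def pick_tip(uber_rides_this_week):
    """Return a contextual tip when behavior triggers one, else a generic tip."""
    if uber_rides_this_week >= 3:
        return (f"You took {uber_rides_this_week} Uber rides this week. "
                "Biking your short trips could cut that CO2.")
    return GENERIC_TIP


def send_tip_sms(account_sid, auth_token, from_number, to_number, tip):
    """Text the chosen tip to the user (requires `pip install twilio`)."""
    from twilio.rest import Client  # imported here so the sketch runs without twilio installed
    client = Client(account_sid, auth_token)
    return client.messages.create(body=tip, from_=from_number, to=to_number)
```

Swapping `send_tip_sms` for a SparkPost transmission call gives the email variant of the same flow.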

APIs Used

We were very impressed by how many APIs the SaveURPlanet team was able to use in a very short time. Here are the APIs that the team used to make this project possible. Click the RapidAPI links below to see the endpoints and start experimenting.

That’s it for us! What did you think of SaveURPlanet’s project? Do you have any hacks addressing climate change? Let us know in the comments below!
