Particle to Panoply

This page provides instructions for extracting data from Particle and loading it into Panoply. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)

What is Particle?

Particle allows businesses to bring their Internet of Things (IoT) products to market faster. It provides a secure, easy-to-use, full-stack IoT cloud platform and low-cost connected hardware.

What is Panoply?

Panoply is a fully managed data warehouse service that can spin up an Amazon Redshift instance in just a few clicks. It uses machine learning and natural language processing (NLP) to learn, model, and automate standard data management activities from source to analysis. It can import data with no schema, no modeling, and no configuration. With Panoply, you can use your favorite analysis, SQL, and visualization tools just as you would if you were creating a Redshift data warehouse on your own.

Getting data out of Particle

Particle exposes events through webhooks. To use webhooks, log into your Particle console and click the Integrations tab, then click New Integration > Webhook. Set the event name to the event you want to track; it's good practice to match it to the name of the field where you want the data to live in your data warehouse. Set the URL to the endpoint that will accept the data. Leave the request type as POST. In the device field, select the device that should trigger the webhook. Finally, click Create Webhook.

Sample Particle data

Whenever a tracked event fires, Particle sends its data to your webhook URL as a JSON-formatted POST request. The fields in the payload reflect the event your device publishes. For instance:

{
    "event": [event-name],
    "data": [event-data],
    "published_at": [timestamp],
    "coreid": [device-id]
}

Loading data into Panoply

Once you have identified all of the columns you want to insert, you can use the CREATE TABLE statement in Panoply's Redshift data warehouse to create a table to receive all of the data.
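
As a sketch, a table matching the webhook payload shown above might look like the statement below. The table name, column names, and column types are assumptions; adapt them to your own events.

CREATE TABLE particle_events (
    event        VARCHAR(255),   -- event name set in the webhook
    data         VARCHAR(MAX),   -- raw event payload; parse downstream if needed
    published_at TIMESTAMP,      -- when the event was published
    coreid       VARCHAR(64)     -- ID of the device that published the event
);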

With a table built, it may seem like the easiest way to migrate your data (especially if there isn't much of it) is to build INSERT statements to add data to your Redshift table row by row. If you have any experience with SQL, this will be your gut reaction. But beware! Redshift isn't optimized for inserting data one row at a time. If you have a high volume of data to be inserted, you would be better off loading the data into Amazon S3 and then using the COPY command to load it into Redshift.
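
For example, assuming the particle_events table sketched above and an S3 bucket holding newline-delimited JSON files (the bucket path and IAM role ARN below are placeholders), the load might look something like this:

COPY particle_events
FROM 's3://your-bucket/particle/'
IAM_ROLE 'arn:aws:iam::123456789012:role/your-redshift-role'
FORMAT AS JSON 'auto'
TIMEFORMAT 'auto';

JSON 'auto' tells Redshift to map JSON keys to column names, which works here because the table columns match the webhook payload fields.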

Keeping Particle data up to date

Once you've coded up a script or written a program to get the data you want and move it into your data warehouse, you're going to have to maintain it. If Particle modifies its API, or sends a field with a datatype your code doesn't recognize, you may have to modify the script. If your users want slightly different information, you definitely will have to.

Other data warehouse options

Panoply is great, but sometimes you need to optimize for different things when you're choosing a data warehouse. Some folks choose to go with Amazon Redshift, Google BigQuery, PostgreSQL, or Snowflake, which are relational databases and data warehouses that use similar SQL syntax. If you're interested in seeing the relevant steps for loading data into one of these platforms, check out To Redshift, To BigQuery, To Postgres, and To Snowflake.

Easier and faster alternatives

If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.

Thankfully, products like Stitch were built to solve this problem automatically. With just a few clicks, Stitch starts extracting your Particle data via the API, structuring it in a way that is optimized for analysis, and inserting that data into your Panoply data warehouse.