Wildfire Risk Technology

Collecting and transforming the data, consolidating it, modelling predictions, and finally visualizing the product each involved different technologies.

Data Pipeline

To produce the data for the Wildfire Risk Tool, we implemented an ingestion pipeline that pulls the data we need, integrates it, and feeds it to a machine learning model that powers our recommendations.
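
At a high level, the pipeline is a chain of extract, transform, and load stages. The sketch below illustrates that shape only; the function names and the toy "sources" are invented for illustration and are not the project's actual code:

```python
# Illustrative sketch of the pipeline stages (names are hypothetical,
# not taken from the actual codebase).

def extract(source):
    """Pull raw records from one data source (e.g. via its API)."""
    return list(source)

def transform(records):
    """Clean and normalise the raw records."""
    return [r for r in records if r is not None]

def load(records, table):
    """Append cleaned records to a destination table (here, a dict of lists)."""
    table.setdefault("rows", []).extend(records)
    return table

def run_pipeline(sources):
    """Run every source through extract -> transform -> load."""
    table = {}
    for source in sources:
        load(transform(extract(source)), table)
    return table

# Example run with two toy "sources", one containing a bad record:
consolidated = run_pipeline([[1, None, 2], [3]])
```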

At the extraction layer, we used Jupyter notebooks written in Python. Each data source was pulled and processed using the APIs its provider makes available.
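
As a concrete sketch of the pull-and-process step, the snippet below cleans a small weather extract with pandas. The column names and values are invented for illustration; an inline CSV stands in for an API response:

```python
import io

import pandas as pd

# A toy CSV standing in for one source's API response (illustrative only).
raw_csv = io.StringIO(
    "station,date,temp_c\n"
    "A1,2020-08-01,35.2\n"
    "A1,2020-08-02,\n"      # missing reading, to be dropped
    "B7,2020-08-01,28.9\n"
)

df = pd.read_csv(raw_csv, parse_dates=["date"])
df = df.dropna(subset=["temp_c"])          # drop rows with missing readings
df["station"] = df["station"].str.upper()  # normalise identifiers
```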

We used publicly available data provided by the following sources:

Once data cleaning and transformation were complete, the data was uploaded to Google BigQuery, our big data solution. The data was loaded either directly or via Google Cloud Storage as intermediate storage. Once in BigQuery, our data was available as tables that could be queried with SQL.

In BigQuery, we consolidated the individual tables into a single table, which made both machine learning and visualization easier.
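
The consolidation step is essentially a SQL join. The snippet below illustrates the idea with an in-memory SQLite database standing in for BigQuery; the table and column names are invented, but the same `LEFT JOIN` pattern applies in BigQuery's standard SQL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Two stand-in source tables, as they might look after loading (illustrative).
cur.execute("CREATE TABLE weather (cell_id TEXT, avg_temp REAL)")
cur.execute("CREATE TABLE fires (cell_id TEXT, fire_count INTEGER)")
cur.executemany("INSERT INTO weather VALUES (?, ?)", [("c1", 31.0), ("c2", 24.5)])
cur.executemany("INSERT INTO fires VALUES (?, ?)", [("c1", 3)])

# Consolidate into one wide table, keeping cells with no recorded fires.
cur.execute("""
    CREATE TABLE consolidated AS
    SELECT w.cell_id, w.avg_temp, COALESCE(f.fire_count, 0) AS fire_count
    FROM weather w
    LEFT JOIN fires f ON f.cell_id = w.cell_id
""")
rows = cur.execute("SELECT * FROM consolidated ORDER BY cell_id").fetchall()
```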

Modelling and Computation

For feature engineering and modelling, we loaded data from the consolidated table into a Google Compute Engine instance, where our models output a prediction table. The prediction table, alongside the consolidated and detailed weather and transmission data, was then used to create our dashboard product.
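
As a hedged sketch of the modelling step, the code below fits a simple logistic model to synthetic features and emits a per-cell risk probability, i.e. a toy "prediction table". The features, labels, and model are illustrative only and are not the project's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature matrix: [avg_temp, wind_speed] per grid cell, plus
# synthetic "fire" labels (illustrative, not real data).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit a logistic model by plain gradient descent.
w = np.zeros(2)
for _ in range(500):
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - y) / len(y)

# "Prediction table": one risk probability per grid cell.
risk = sigmoid(X @ w)
```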

Visualization

To create our dashboards, we used Tableau workbooks and stories. Our product was uploaded to Tableau Public and embedded in our website.

For our website and documentation, we used Jekyll to turn our text content into a static website that frames our Tableau dashboards. Jekyll simplifies site design through themes, and this website uses the Massively theme. Our website is hosted on GitHub Pages.
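
A minimal `_config.yml` for such a setup might look like the following. The values are illustrative, and the `remote_theme` placeholder is hypothetical; the exact key depends on how the Massively port is packaged for GitHub Pages:

```yaml
# _config.yml (illustrative sketch, not the site's actual config)
title: Wildfire Risk Tool
description: Dashboards and documentation for wildfire risk predictions

# Hypothetical placeholder: point at the Jekyll port of the Massively theme.
remote_theme: <owner>/<massively-jekyll-theme>

plugins:
  - jekyll-remote-theme
```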