10 changes: 10 additions & 0 deletions docs/get-started/project-templates.md
@@ -36,4 +36,14 @@ Project templates also provide a starting point for your own projects - our proj

[Explore :octicons-arrow-right-24:](../tutorials/clickstream/overview.md)

- __Predictive maintenance__

---

![Predictive maintenance pipeline](../images/project-templates/predictive-maintenance-pipeline.png)

Predicts failures in 3D printers.

[Explore :octicons-arrow-right-24:](../tutorials/predictive-maintenance/overview.md)

</div>
4 changes: 2 additions & 2 deletions docs/tutorials/currency-alerting/currency-alerting.md
@@ -145,13 +145,13 @@ This microservice reads from the `currency-rate-alerts` topic and whenever a new

It also reads the contents of the message and enriches the notification with details on how the threshold was crossed, that is, whether the price is moving up or down.

To set up the push nonfiction microservice, follow these steps:
To set up the push notification microservice, follow these steps:

1. Click on the `Code Samples` icon in the left-hand navigation.

2. In the search box on the Code Samples page, enter "Pushover".

You will see the `Threshold Alert` sample appear in the search results:
You will see the `Pushover Output` sample appear in the search results:

![Pushover Notifications](./images/library-pushover.png "Pushover Notifications")

12 changes: 12 additions & 0 deletions docs/tutorials/overview.md
@@ -44,6 +44,18 @@ Some tutorials use [project templates](../get-started/project-templates.md) - th

[Explore :octicons-arrow-right-24:](../tutorials/clickstream/overview.md)

- __Predictive maintenance__

---

![Predictive maintenance pipeline](../images/project-templates/predictive-maintenance-pipeline.png)

`Project template`

Predicts failures in 3D printers.

[Explore :octicons-arrow-right-24:](../tutorials/predictive-maintenance/overview.md)

- __Train and deploy machine learning (ML)__

---
104 changes: 104 additions & 0 deletions docs/tutorials/predictive-maintenance/alert-service.md
@@ -0,0 +1,104 @@
# Alert service

Sends alerts to an output topic when a temperature goes under or over its threshold.

It receives data from two topics (3D printer data and forecast) and publishes an alert to the output topic `alerts` whenever a current or forecast temperature crosses a threshold.

![Alert pipeline segment](./images/alert-pipeline-segment.png)

The default thresholds are as shown in the following table:

| Variable | Default value (°C) |
|----|----|
| min_ambient_temperature | 45 |
| max_ambient_temperature | 55 |
| min_bed_temperature | 105 |
| max_bed_temperature | 115 |
| min_hotend_temperature | 245 |
| max_hotend_temperature | 255 |

These thresholds determine whether the current or forecast temperature is under or over the permitted range. If it is, an alert is published to the `alerts` topic.

Note that there are several alert types. The message format for the `no-alert` type is:

``` json
{
"status": "no-alert",
"parameter_name": "hotend_temperature",
"message": "'Hotend temperature' is within normal parameters",
"alert_timestamp": 1701280033000000000,
"alert_temperature": 246.04148121958596
}
```

An example of the `under-now` alert message format:

``` json
{
"status": "under-now",
"parameter_name": "bed_temperature",
"alert_timestamp": 1701273328000000000,
"alert_temperature": 104.0852349596566,
"message": "'Bed temperature' is under the threshold (105ºC)"
}
```

Here's an `over-forecast` alert message format:

``` json
{
"status": "over-forecast",
"parameter_name": "forecast_fluctuated_ambient_temperature",
"alert_temperature": 55.014602460947586,
"alert_timestamp": 1701278280000000000,
"message": "'Ambient temperature' is forecasted to go over 55ºC in 1:36:29."
}
```

Here's the `under-forecast` alert message format:

``` json
{
"status": "under-forecast",
"parameter_name": "forecast_fluctuated_ambient_temperature",
"alert_temperature": 44.98135836928914,
"alert_timestamp": 1701277320000000000,
"message": "'Ambient temperature' is forecasted to fall below 45ºC in 1:20:28."
}
```
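
To make the threshold check concrete, here's a minimal sketch (not the service's actual code) of how a reading might be compared against a threshold pair and turned into an alert payload shaped like the examples above. The `over-now` status is assumed here as the symmetric counterpart of `under-now`:

``` python
import time

# Hypothetical thresholds for the hotend sensor (values from the table above)
MIN_HOTEND = 245
MAX_HOTEND = 255

def build_alert(parameter_name: str, display_name: str, temperature: float,
                min_threshold: float, max_threshold: float) -> dict:
    """Return an alert payload shaped like the examples above."""
    if temperature < min_threshold:
        status = "under-now"
        message = f"'{display_name}' is under the threshold ({min_threshold}ºC)"
    elif temperature > max_threshold:
        status = "over-now"  # assumed counterpart of under-now
        message = f"'{display_name}' is over the threshold ({max_threshold}ºC)"
    else:
        status = "no-alert"
        message = f"'{display_name}' is within normal parameters"

    return {
        "status": status,
        "parameter_name": parameter_name,
        "alert_timestamp": time.time_ns(),  # nanosecond timestamp, as in the examples
        "alert_temperature": temperature,
        "message": message,
    }

# Example: a hotend reading slightly over its maximum threshold
print(build_alert("hotend_temperature", "Hotend temperature", 256.2, MIN_HOTEND, MAX_HOTEND))
```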

The Printers dashboard service subscribes to these alerts and displays them in real time on the scrolling charts, as well as in the scrolling alert display:

![Alerts](./images/alerts-display.png)

## Check the log messages

It can be very useful to check the logs for a service. To do this from the pipeline view:

1. Click on Alert Service in the pipeline view.

2. Click the `Logs` tab.

You can now view the log messages:

![Log messages](./images/alert-service-logging.png)

## View the message format

You can also view the actual messages being transferred through the service:

1. Click the `Messages` tab.

2. You can now select either the input or output topic as required from the topic drop-down:

![Topic drop down](./images/messages-topic-dropdown.png)

3. You can now explore the messages. Click on a message to display it:

![Message format](./images/message-format.png)

Make sure you click the `Live` tab to continue viewing live messages.

## 🏃‍♀️ Next step

[Part 6 - InfluxDB raw data service :material-arrow-right-circle:{ align=right }](./influxdb-raw-data.md)
111 changes: 111 additions & 0 deletions docs/tutorials/predictive-maintenance/data-generator.md
@@ -0,0 +1,111 @@
# Data generator

This service generates temperature data simulating a fleet of 3D printers, with three temperature sensors simulated per printer.

![data generator pipeline segment](./images/data-generator-pipeline-segment.png)

For each printer, the enclosure temperature is programmed to decrease starting at the 4-hour point. It drops below the minimum threshold of 45°C at 5h 47m, the failure point.

The simulation speed is 10x actual speed, so the temperature will start to drop at approximately 24 minutes and cross the minimum threshold at around 34m 44s.
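
The timings above follow directly from the replay speed. A quick back-of-the-envelope check in plain Python (the 10x figure is taken from the text, not read from the service's configuration):

``` python
replay_speed = 10

drop_starts = 4 * 60 * 60            # simulated seconds until the temperature starts to drop
failure_point = 5 * 3600 + 47 * 60   # simulated seconds until the 45°C threshold is crossed

print(drop_starts / replay_speed / 60)    # 24.0 real minutes
print(failure_point / replay_speed / 60)  # 34.7 real minutes (roughly 34m 42s)
```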

When printing with a heat-sensitive material such as ABS (Acrylonitrile Butadiene Styrene), it’s important to ensure that the temperatures remain stable.

The [forecasting algorithm](./forecast-service.md) attempts to estimate when this will happen, and the resulting alert is displayed on a dashboard.

## Data published

The generated data is published to the `3d-printer-data` topic:

* Ambient temperature
* Ambient temperature with fluctuations
* Bed temperature
* Hot end temperature
* original_timestamp
* Printer finished printing

This service runs continually.
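
As a rough illustration, here's a minimal sketch of publishing a single reading for one printer with Quix Streams. The names and values are invented for illustration and this is not the generator's actual code, but it produces data shaped like the message example below:

``` python
import pandas as pd
import quixstreams as qx

# Connect to the environment and open (or create) a stream for one printer
client = qx.QuixStreamingClient()
topic_producer = client.get_topic_producer("3d-printer-data")
stream_producer = topic_producer.get_or_create_stream("Printer 1")

# One row of sensor readings; the TAG__ prefix marks the printer name as a tag
df = pd.DataFrame([{
    "timestamp": pd.Timestamp.now(),
    "hotend_temperature": 250.8,
    "bed_temperature": 106.9,
    "ambient_temperature": 36.9,
    "fluctuated_ambient_temperature": 36.9,
    "original_timestamp": "2023-11-29 17:05:27",
    "TAG__printer": "Printer 1",
}])

stream_producer.timeseries.publish(df)
```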

## Exploring the message format

If you click `Topics` in the main left-hand navigation, you see the topics in the environment. Click in the `Data` area to view live data; this takes you into the Quix data explorer. You can then select the stream and parameter data you'd like to explore, and view it in either the `Table` or `Messages` view.

If you look at the messages in the `Messages` view, you'll see data has the following format:

``` json
{
"Epoch": 0,
"Timestamps": [
1701277527000000000
],
"NumericValues": {
"hotend_temperature": [
250.8167407832582
],
"bed_temperature": [
106.9299672495977
],
"ambient_temperature": [
36.92387946005222
],
"fluctuated_ambient_temperature": [
36.92387946005222
]
},
"StringValues": {
"original_timestamp": [
"2023-11-29 17:05:27"
]
},
"BinaryValues": {},
"TagValues": {
"printer": [
"Printer 72"
]
}
}
```

The Quix data explorer is a very useful tool for debugging and monitoring your pipeline.

## Viewing the deployed application

In the left-hand main navigation, click `Deployments` to see all the deployed services and jobs in the environment. Click `Data Generator` to select the deployment. This takes you to an extremely useful screen where you can:

1. View the status of the deployment (such as CPU, memory usage, and replicas assigned).
2. See the live logs for the service.
3. See the topic lineage for the service.
4. Access Build logs (in case of errors when the service is built).
5. Access the Messages tab, where you can then see messages associated with the service in real time.

## Viewing the application code

There are many ways to view the code for the application (which is then deployed as a job or service). The quickest way from the current screen is to click the area shown:

![Go to code view](./images/data-generator-deployment-code-view.png)

You'll now be in the code view with the **version of the deployed code** displayed.

If you review the code, you'll see that data is generated for each printer, and each printer has its own stream for the generated data:

``` python
tasks = []
printer_data = generate_data()

# Distribute all printers over the data length
delay_seconds = int(os.environ['datalength']) / replay_speed / number_of_printers

for i in range(number_of_printers):
# Set stream ID or leave parameters empty to get stream ID generated.
name = f"Printer {i + 1}" # We don't want a Printer 0, so start at 1

# Start sending data, each printer will start with some delay after the previous one
tasks.append(asyncio.create_task(generate_data_and_close_stream_async(topic_producer, name, printer_data.copy(), delay_seconds * i)))

await asyncio.gather(*tasks)
```

Feel free to explore the code further.

## 🏃‍♀️ Next step

[Part 3 - Downsampling service :material-arrow-right-circle:{ align=right }](./downsampling.md)
53 changes: 53 additions & 0 deletions docs/tutorials/predictive-maintenance/downsampling.md
@@ -0,0 +1,53 @@
# Downsampling

This service reduces the sampling rate of data from one per second to one per minute.

![Downsampling pipeline segment](./images/downsampling-pipeline-segment.png)

The service uses a buffer to hold the data for one minute before releasing it:

``` python
# buffer 1 minute of data
buffer_configuration = qx.TimeseriesBufferConfiguration()
buffer_configuration.time_span_in_milliseconds = 1 * 60 * 1000
```

During buffering, the data is aggregated in the dataframe handler:

``` python
def on_dataframe_received_handler(originating_stream: qx.StreamConsumer, df: pd.DataFrame):
if originating_stream.properties.name is not None and stream_producer.properties.name is None:
stream_producer.properties.name = originating_stream.properties.name + "-down-sampled"

# Identify numeric and string columns
numeric_columns = [col for col in df.columns if not col.startswith('TAG__') and
col not in ['time', 'timestamp', 'original_timestamp', 'date_time']]
string_columns = [col for col in df.columns if col.startswith('TAG__')]

# Create an aggregation dictionary for numeric columns
numeric_aggregation = {col: 'mean' for col in numeric_columns}

# Create an aggregation dictionary for string columns (keeping the last value)
string_aggregation = {col: 'last' for col in string_columns}

# Merge the two aggregation dictionaries
aggregation_dict = {**numeric_aggregation, **string_aggregation}

df["timestamp"] = pd.to_datetime(df["timestamp"])

# resample and get the mean of the input data
df = df.set_index("timestamp").resample('1min').agg(aggregation_dict).reset_index()

# Send filtered data to output topic
stream_producer.timeseries.buffer.publish(df)
```
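
If you want to see the effect of the aggregation in isolation, here's a small standalone pandas example (sample values, not data from the pipeline) that applies the same mean/last aggregation with a one-minute resample:

``` python
import pandas as pd

# Two minutes of one-second readings for a single printer
start = pd.Timestamp("2023-11-29 17:05:00")
df = pd.DataFrame({
    "timestamp": [start + pd.Timedelta(seconds=i) for i in range(120)],
    "hotend_temperature": [250.0 + i * 0.01 for i in range(120)],
    "TAG__printer": ["Printer 72"] * 120,
})

# Numeric columns are averaged, TAG__ columns keep their last value
aggregation_dict = {"hotend_temperature": "mean", "TAG__printer": "last"}
downsampled = df.set_index("timestamp").resample("1min").agg(aggregation_dict).reset_index()

print(downsampled)  # two rows, one per minute
```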

You can read more about using buffers in the [buffer documentation](https://quix.io/docs/quix-streams/v0-5-stable/subscribe.html#using-a-buffer).

The aggregated data is published to the output stream (one stream for each printer).

The output topic for the service is `downsampled-3d-printer-data`. Other services, such as the Forecast service and the InfluxDB raw data storage service, subscribe to this topic.

## 🏃‍♀️ Next step

[Part 4 - Forecast service :material-arrow-right-circle:{ align=right }](./forecast-service.md)
68 changes: 68 additions & 0 deletions docs/tutorials/predictive-maintenance/forecast-service.md
@@ -0,0 +1,68 @@
# Forecast service

Generates a forecast for the temperature data received from the input topic. The aim is to predict a potential failure condition: the ambient temperature of the 3D printer dropping below the minimum threshold for successful ABS-based printing.

![Forecast pipeline segment](./images/forecast-pipeline-segment.png)

The forecast is made using the downsampled data as the input, and using the scikit-learn library. The forecasts are published to the `forecast` topic. The Alert service and Printers dashboard service both subscribe to this topic.

## Data format

The forecast data format is:

```json
{
"Epoch": 0,
"Timestamps": [
1701284880000000000,
1701284940000000000,
1701285000000000000,
...
1701313620000000000
],
"NumericValues": {
"forecast_fluctuated_ambient_temperature": [
42.35418149532191,
42.43955555085827,
42.52524883234062,
...
119.79365961797913
]
},
"StringValues": {},
"BinaryValues": {},
"TagValues": {
"printer": [
"Printer 19-down-sampled",
"Printer 19-down-sampled",
"Printer 19-down-sampled",
...
"Printer 19-down-sampled"
]
}
}
```
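
A consumer of this topic typically pairs each nanosecond timestamp with the corresponding forecast value. Here's a minimal sketch (illustrative only, not the dashboard's actual code) of doing that with pandas:

``` python
import pandas as pd

# A trimmed-down forecast message, shaped like the example above
message = {
    "Timestamps": [1701284880000000000, 1701284940000000000, 1701285000000000000],
    "NumericValues": {
        "forecast_fluctuated_ambient_temperature": [42.35, 42.44, 42.53]
    },
}

forecast = pd.DataFrame({
    "time": pd.to_datetime(message["Timestamps"], unit="ns"),
    "forecast_fluctuated_ambient_temperature":
        message["NumericValues"]["forecast_fluctuated_ambient_temperature"],
})

print(forecast)  # one forecast point per minute, with readable timestamps
```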

## Prediction algorithm

The prediction is carried out by the `scikit-learn` library, using polynomial regression with a quadratic (second-degree) model:

``` python
forecast_input = df[parameter_name]

# Define the degree of the polynomial regression model
degree = 2
# Create a polynomial regression model
model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
# Fit the model to the data
model.fit(np.array(range(len(forecast_input))).reshape(-1, 1), forecast_input)
# Forecast the future values
forecast_array = np.array(range(len(forecast_input), len(forecast_input) + forecast_length)).reshape(-1, 1)
forecast_values = model.predict(forecast_array)
# Create a DataFrame for the forecast
fcast = pd.DataFrame(forecast_values, columns=[forecast_label])
```
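
The snippet above is an excerpt from the service, so names such as `df`, `parameter_name` and `forecast_length` are defined elsewhere in its code. Here's a small self-contained example of the same approach on made-up data, so you can run it in isolation:

``` python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Invented downsampled readings: one ambient temperature value per minute
history = pd.Series([48.0, 47.6, 47.1, 46.7, 46.2, 45.8, 45.3])
forecast_length = 10  # forecast the next ten minutes

# Fit a second-degree polynomial regression to the history
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
x = np.arange(len(history)).reshape(-1, 1)
model.fit(x, history)

# Predict the future values
future_x = np.arange(len(history), len(history) + forecast_length).reshape(-1, 1)
print(model.predict(future_x))
```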

## 🏃‍♀️ Next step

[Part 5 - Alert service :material-arrow-right-circle:{ align=right }](./alert-service.md)