Monitoring in SpringBoot 2.0: Micrometer + InfluxDB + Chronograf

Build your mini-monitoring tool for SpringBoot application


Gathering relevant information about our application’s performance and health helps ensure system stability. If we know our application’s performance metrics, we can take specific measures to enhance performance, or analyse the data and prepare reports. Most enterprises use time-series databases to store these metrics. In this tutorial we will see how to set up a monitoring system for our SpringBoot application.

In SpringBoot 2.0, we have Actuator to monitor our metrics. Actuator provides HTTP endpoints where we can see application data such as Spring beans, Liquibase migrations, health status and metrics.

Add the following dependency in your pom.xml file to enable Actuator.
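The Actuator starter uses the standard Spring Boot coordinates (the version is managed by the Spring Boot parent POM):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```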


When you run your application and hit localhost:8080/actuator/health, you will get the health status of your application. We will be using /actuator/metrics for our purpose. All the system metrics collected by Spring are exposed here. We will also push our custom monitoring metrics to this path.


As Spring states, Micrometer is a dimensional-first metrics collection tool which helps you time, count and gauge your code. With very little configuration we can export these metrics to one or many monitoring systems (Prometheus, InfluxDB etc.). We will add the following dependency to get started with Micrometer.
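The registry dependency named in the next paragraph, micrometer-registry-influx, is added like this:

```xml
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-influx</artifactId>
</dependency>
```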


By adding micrometer-registry-influx we automatically enable exporting data to InfluxDB.

PizzaDelivery Application

Let’s develop an application for a pizza delivery store, for example PizzaHut. The application will expose an API where partners can place orders for a pizza. Suppose our partners are online food ordering apps like UberEats, Zomato, Swiggy etc. Our application will receive orders, put them into processing, and then mark them successful once they are dispatched for delivery.

Set up a SpringBoot application using Spring Initializr and add the above dependencies. We will start with a PizzaOrderController.

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.Metrics;

@RestController
public class PizzaOrderController {

    @RequestMapping(value = "create", method = RequestMethod.POST)
    public long createOrder(@RequestParam String partnerId,
                            @RequestParam String type, @RequestParam String location) {
        long orderId = createOrder(type);
        increaseCount(partnerId, "received");
        return orderId;
    }

    private void increaseCount(String partnerId, String state) {
        // Counter stores the measurement name and the tags and their values
        Counter counter = Metrics.counter("request.orders", "partnerId",
                partnerId, "state", state);
        counter.increment();
    }

    private long createOrder(String type) {
        // create order
        return (long) (Math.random() * 1000);
    }

    private void processOrders() {
        List<Order> orders = getReceivedOrders();
        orders.forEach(order -> {
            boolean processed = processOrder(order);
            if (processed) {
                increaseCount(order.getPartnerId(), "processed");
            }
        });
    }

In the above code, we have created an API for receiving orders from our partners. Partners will send us their id, the pizza type and the delivery location. We will send them an order id in response.

We want to keep a metric for orders received at a particular moment so that we can watch for spikes. We create a Counter which has a measurement name (“request.orders”), which can be considered a table (for SQL people), and some tags in the form of key-value pairs, somewhat like columns and their values.

Counter counter = Metrics.counter("request.orders", "partnerId",
    partnerId, "state", state);

measurement = “request.orders”, partnerId = “zomato”, state = “received”

This metric data will be pushed to the actuator endpoint; Micrometer does this for us, we don’t need to do anything. Let’s run our application and look at the results. After making a few random requests we get this:

"name": "request.orders",
"description": null,
"baseUnit": null,
"measurements": [
"statistic": "COUNT",
"value": 7
"availableTags": [
"tag": "state",
"values": [
"tag": "partnerId",
"values": [

Micrometer sends this data periodically to the actuator endpoints.

Micrometer is a dimensional-first metrics collection tool which helps you time, count, gauge your code

What is dimensional first?

Suppose we wanted to measure a metric for HTTP requests. Earlier we had to create hierarchical measurements like http.request.count. But now we want to count requests by method, e.g. the count of POST and GET requests. With hierarchical measurements we had to create two separate measurements, such as http.request.count.get and http.request.count.post.

But with Micrometer we create only one measurement and add tags [post, get] to it. We can add any number of tags to a measurement, i.e. we add dimensions to the measurement.
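To make the idea concrete, here is a toy stdlib-only sketch of dimensional counting (not Micrometer itself; the class and method names are made up for illustration). One measurement name plus tag key-value pairs identifies a series, instead of a separate hierarchical name per variant:

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Toy illustration of dimensional metrics: one measurement name,
// counts keyed by tag values instead of by hierarchical names.
public class DimensionalCounter {
    private final Map<String, LongAdder> counts = new ConcurrentHashMap<>();

    // Tags are key-value pairs; sorting keys makes the series id stable.
    public void increment(String measurement, Map<String, String> tags) {
        String seriesId = measurement + new TreeMap<>(tags);
        counts.computeIfAbsent(seriesId, k -> new LongAdder()).increment();
    }

    public long count(String measurement, Map<String, String> tags) {
        String seriesId = measurement + new TreeMap<>(tags);
        LongAdder adder = counts.get(seriesId);
        return adder == null ? 0 : adder.sum();
    }

    public static void main(String[] args) {
        DimensionalCounter metrics = new DimensionalCounter();
        // One measurement, one "method" dimension -- no http.request.count.get needed
        metrics.increment("http.request.count", Map.of("method", "get"));
        metrics.increment("http.request.count", Map.of("method", "get"));
        metrics.increment("http.request.count", Map.of("method", "post"));
        System.out.println(metrics.count("http.request.count", Map.of("method", "get")));  // 2
        System.out.println(metrics.count("http.request.count", Map.of("method", "post"))); // 1
    }
}
```

Adding another dimension (say a "status" tag) requires no new measurement names, which is exactly what Micrometer’s tag arguments give us.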


Now it’s time to send this data to a time-series database, which will store the measurements, tags and their values against a particular timestamp; hence the name time-series database. These databases store the values observed at a particular instant.

brew install influxdb
brew services start influxdb

This should install InfluxDB on your local machine, to which we will sync our metrics. Let’s add some configuration in the application.properties file to export this data.
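A minimal application.properties sketch, assuming the database name pizza used later in the influx shell and InfluxDB’s default port 8086:

```properties
management.metrics.export.influx.db=pizza
management.metrics.export.influx.step=1m
management.metrics.export.influx.uri=http://localhost:8086
```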


management.metrics.export.influx.db — the database name

management.metrics.export.influx.step — the interval at which data is exported to the db. We have kept it at 1 minute, so Micrometer will sync the count of requests received in the last minute.

management.metrics.export.influx.uri — the URL of the InfluxDB instance

Now hit ~10–20 requests to create pizza orders, then execute the following commands to see the data in the db:

influx                          // connects with the local db
show databases                  // should list all the databases
use pizza                       // connect to the pizza database
select * from "request_orders"

RESULT:
name: request_orders
time                metric_type partnerId state    value
----                ----------- --------- -----    -----
1573928432652000000 counter     swiggy    received 0
1573928492645000000 counter     swiggy    received 14
1573928552649000000 counter     swiggy    received 0
1573928612656000000 counter     swiggy    received
1573928672653000000 counter     swiggy    received 0
1573928732654000000 counter     swiggy    received 0
1573928792661000000 counter     swiggy    received 0
1573928852672000000 counter     swiggy    received 11
1573928912720000000 counter     swiggy    received 0
1573928972678000000 counter     swiggy    received 0
1573929032691000000 counter     swiggy    received 0
1573929092667000000 counter     swiggy    received 0
1573929152668000000 counter     swiggy    received 0
1573929212675000000 counter     swiggy    received 0

So we can see that Micrometer has successfully pushed the data into our db. Let’s display the data on a dashboard using Chronograf.


Install Chronograf on your local system using the following commands:

brew install telegraf
brew install chronograf
brew services start telegraf
brew services start chronograf

Chronograf runs on port 8888 by default. Open http://localhost:8888 in the browser.

  • Add a connection to our local influx db
  • Create a dashboard with the name Pizza Delivery (5th icon from the bottom on the sidebar)
  • Let’s add some data to visualise. Click on Add Data.

One can directly write an InfluxQL (SQL-like) query, or simply select the measurements and keys to display and the query is created automatically. Click the green button in the top right corner.
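As a sketch, a query for the received-orders graph could look like the following (assuming the request_orders measurement and tags shown earlier; the time window is arbitrary):

```sql
-- Orders received per minute, broken down by partner
SELECT SUM("value") FROM "request_orders"
WHERE "state" = 'received' AND time > now() - 1h
GROUP BY time(1m), "partnerId"
```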

Awesome! We have now used our metrics to get meaningful insights. We can analyse at what rate orders are being created in the received state and what the overall order trend looks like. Let’s add a visualisation for each partner.

We added two queries to the same graph to visualise individual partner requests.

Final dashboard

After hitting multiple requests, we get this. We can see that we got a spike from “zomato”, along with the number of orders.

Using Micrometer to push the data to InfluxDB and visualising it with Chronograf, we got reports for our custom metrics and can monitor our application more efficiently. We can add several metrics depending on our requirements and display them on the dashboard. I would encourage readers to leverage these three tools to ease monitoring and focus on development. Explore Chronograf to learn more about its features such as templates, annotations, visualisations etc.

You can try collecting information like how many “received” orders are getting “processed”.

private void processOrders() {
    List<Order> orders = getReceivedOrders();
    orders.forEach(order -> {
        boolean processed = processOrder(order);
        if (processed) {
            increaseCount(order.getPartnerId(), "processed");
        }
    });
}

One can also explore Prometheus as an alternative to InfluxDB, and Grafana as an alternative to Chronograf. You will find tons of information related to this on the web.


Let me know if I missed something or if anything was unclear. Open to suggestions.

Tech | Travel | TV series