Monitoring Hyperledger Fabric components using Prometheus
As more and more Hyperledger Fabric-based blockchain networks come online, monitoring and understanding the performance of Hyperledger Fabric components becomes a must. The Operations Service documentation of Hyperledger Fabric introduces a Prometheus interface for collecting metrics. In this article, I will showcase how this can be done with the Prometheus server deployed as part of IBM Cloud Private, which is the default deployment model of IBM Blockchain Platform for Multicloud.
Before I start, I would like to thank my teammates, Aldred and ZhiMin. The assets they shared on GitHub and Medium made this configuration easy.
Aldred's repository covers how to register a new Enroll ID and extract the key and cert needed to connect to a Peer — https://github.com/aldredb/sysdig-ibpv2-integration
ZhiMin's article covers how to configure a new job in Prometheus's ConfigMap — https://medium.com/@zhimin.wen/custom-prometheus-metrics-for-apps-running-in-kubernetes-498d69ada7aa
Basically, I combined the steps from both to make this integration happen.
1. Build a Blockchain network
You can follow the steps in the IBM Blockchain Platform documentation to create the CA, MSP, Peers, and Ordering Service for your blockchain network — https://cloud.ibm.com/docs/services/blockchain?topic=blockchain-ibp-console-build-network
2. Register Enroll ID
Once the new Enroll ID is registered, you can follow the steps in Aldred's GitHub repository to get the private key and cert, then test the connection to the Peer to ensure the metrics can be retrieved. Instead of using the .key and .pem files directly, we need them in base64 form, as we will store them in a Kubernetes Secret.
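As a sketch, the base64 conversion can be done as follows. Sample files are created here so the snippet is self-contained; substitute the real admin.pem and admin.key files you extracted in the previous step.

```shell
# Create stand-in files for illustration only -- replace these with the
# real cert and key extracted by following Aldred's repository.
printf 'sample certificate data' > admin.pem
printf 'sample key data' > admin.key

# Encode as single-line base64 (no wrapping), the form required
# inside a Kubernetes Secret's "data" section.
CERT_B64=$(base64 -w 0 admin.pem)
KEY_B64=$(base64 -w 0 admin.key)
echo "cert: $CERT_B64"
echo "key:  $KEY_B64"
```

The resulting single-line strings are what we will place into the Kubernetes Secret in the next step.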
3. Modify Kubernetes Secret
To make things easier, I re-used an existing Kubernetes Secret, etcd-secret, to store our key and cert instead of modifying the deployment file of Prometheus. Please run cloudctl to log in to IBM Cloud Private before you execute any kubectl command. First, export the existing Secret:
kubectl get secret etcd-secret -n kube-system -o yaml > etcd-secret.yaml
Use a text editor such as Visual Studio Code to modify etcd-secret.yaml.
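As a sketch, the modified Secret might look like the following. The key names admin.pem and admin.key are my assumption, chosen to match the /etc/etcd/admin.pem and /etc/etcd/admin.key paths referenced in the Prometheus job configuration later on:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: etcd-secret
  namespace: kube-system
type: Opaque
data:
  # ...keep the existing etcd entries as-is...
  admin.pem: <single-line base64 of the enroll identity's cert>
  admin.key: <single-line base64 of the enroll identity's key>
```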
Apply the Secret back using the command below:
kubectl -n kube-system apply -f etcd-secret.yaml
4. Define a new job in the Prometheus configuration file
Use the following command to extract the existing Prometheus configuration file from IBM Cloud Private:
kubectl -n kube-system get cm monitoring-prometheus -o jsonpath="{ .data.prometheus\\.yml }" > prom-peer.yaml
Use a text editor to append a new job at the end of the file, as defined below:
- job_name: 'hyperledger_metrics'
  scrape_interval: 10s
  scheme: https
  static_configs:
    - targets:
        - '<<IP_ADDRESS/HOSTNAME>>:<<PORT_NUMBER>>'
  metrics_path: /metrics
  tls_config:
    cert_file: /etc/etcd/admin.pem
    key_file: /etc/etcd/admin.key
    insecure_skip_verify: true
Indentation matters, as YAML files are sensitive to it. Replace the IP address/hostname of the Peer and the port number used. For the scrape interval, scheme, and other configuration options, refer to the Prometheus website.
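For context, the new job sits under the top-level scrape_configs key of prometheus.yml, at the same indentation level as the jobs IBM Cloud Private already defines (a minimal sketch, not the complete file):

```yaml
scrape_configs:
  # ...existing ICP jobs stay above, unchanged...
  - job_name: 'hyperledger_metrics'
    # (rest of the job as shown earlier, each line indented two
    #  spaces deeper than the scrape_configs key)
```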
Once the changes are done, update the Prometheus ConfigMap by deleting it and creating a new one:
kubectl -n kube-system delete cm monitoring-prometheus
kubectl -n kube-system create cm monitoring-prometheus --from-file=prometheus.yml=prom-peer.yaml
5. Verify the configuration
This can be done by accessing the Prometheus console > Status > Targets. You should see the job with a status of "UP". If you get an HTTP 401 error, the private key and/or cert used to access the Peer is incorrect.
6. Configure dashboard at Grafana
IBM Cloud Private comes with a Grafana dashboard pre-installed and pre-configured to integrate with Prometheus. Create a new dashboard and add a panel; select "Prometheus" as the datasource, and in the query you can use the metrics defined in the Hyperledger Fabric Metrics reference.
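For example, assuming the job name configured above, a query for the current block height of each channel on the Peer might look like this (ledger_blockchain_height is one of the metrics listed in the Fabric Metrics reference):

```
ledger_blockchain_height{job="hyperledger_metrics"}
```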
You can define the visualisation that you want in Grafana.
Once again, thanks to my teammates Aldred and ZhiMin for making this happen. Do follow and support their posts!