# Online change point detection with Netdata
This collector uses the Python changefinder library to perform online changepoint detection on your Netdata charts and/or dimensions.
Instead of this collector just collecting data, it also does some computation on the data it collects to return a changepoint score for each chart or dimension you configure it to work on. This is an online machine learning algorithm, so there is no batch step to train the model; instead it evolves over time as more data arrives. That makes this particular algorithm quite cheap to compute at each step of data collection (see the notes section below for more details), and it should scale fairly well to work on lots of charts or hosts (if running on a parent node, for example).
As this is a somewhat unique collector and involves often subjective concepts like changepoints and anomalies, we would love to hear any feedback on it from the community. Please let us know on the community forum or drop us a note at analytics-ml-team@netdata.cloud for any and all feedback, both positive and negative. This sort of feedback is priceless to help us make complex features more useful.

## Charts

Two charts are available:

### ChangeFinder Scores (`changefinder.scores`)

This chart shows the percentile of the score that is output from the ChangeFinder library (it is turned off by default
but available with `show_scores: true`).
A high observed score is more likely to be a valid changepoint worth exploring, even more so when multiple charts or dimensions have high changepoint scores at the same time or very close together.

### ChangeFinder Flags (`changefinder.flags`)

This chart shows `1` or `0` if the latest score has a percentile value that exceeds the `cf_threshold` threshold. By
default, any scores that are in the 99th or above percentile will raise a flag on this chart.
The raw changefinder score itself can be a little noisy, so limiting ourselves to just periods where it surpasses the 99th percentile can help manage the "signal to noise ratio" better.
The `cf_threshold` parameter might be one you want to play around with to tune things specifically for the workloads on
your node and the specific charts you want to monitor. For example, maybe the 95th percentile might work better for you
than the 99th percentile.
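
To make the scoring and flagging mechanism described above concrete, here is a minimal sketch (not the collector's actual code) of how a raw ChangeFinder score can be turned into a percentile over recent scores and then into a flag. It assumes the `changefinder` library's `ChangeFinder` class and its `update()` method; the parameter values shown are purely illustrative.

```python
# Hedged sketch of the score -> percentile -> flag idea described above,
# not the collector's actual implementation.
from collections import deque

import numpy as np
import changefinder

cf = changefinder.ChangeFinder(r=0.5, order=1, smooth=15)  # illustrative parameters
recent_scores = deque(maxlen=14400)  # plays roughly the role of n_score_samples
cf_threshold = 99                    # flag when the latest score is in the 99th percentile or above


def score_and_flag(value: float):
    """Return (percentile, flag) for the latest observation of one dimension."""
    raw_score = cf.update(value)
    recent_scores.append(raw_score)
    # convert the raw score into a percentile of the recently observed scores
    percentile = (np.asarray(recent_scores) <= raw_score).mean() * 100
    return percentile, 1 if percentile >= cf_threshold else 0


# feed it a stream of values, e.g. one collected value per second for a dimension
for v in [0.1, 0.2, 0.15, 5.0, 5.1]:
    print(score_and_flag(v))
```
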
Below is an example of the chart produced by this collector. The first 3/4 of the period looks normal in that we see a few individual changes being picked up somewhat randomly over time. But then at around 14:59, towards the end of the chart, we see two periods with 'spikes' of multiple changes in a small window of time. This is the sort of pattern that might be a sign that something on the system has changed sufficiently to merit some investigation.

## Requirements

- This collector will only work with Python 3 and requires the packages below be installed.
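
As a rough sketch, installing the library for the `netdata` user might look something like the following; the exact package list and any version pins are assumptions, so check the collector's own requirements for the definitive list.

```bash
# become the netdata user so the packages are installed where the collector can find them
sudo su -s /bin/bash netdata
# install the changefinder library plus the scientific Python packages it depends on
# (package list and versions here are illustrative, not authoritative)
pip3 install --user changefinder numpy scipy
```
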
Note: if you need to tell Netdata to use Python 3 then you can pass the below command in the python plugin section
of your `netdata.conf` file.
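
A hedged example of what that might look like in `netdata.conf` (the section name and option syntax below follow the usual python.d plugin configuration, but verify them against your Netdata version):

```
[plugin:python.d]
    # tell the python.d orchestrator to run its collectors with python3
    command options = -ppython3
```
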

## Configuration

Install the Python requirements above, enable the collector and restart Netdata.
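
For example, a typical sequence might look like the below (the restart command is an assumption and depends on how your system manages the Netdata service):

```bash
cd /etc/netdata   # or your Netdata config directory, if different
sudo ./edit-config python.d.conf
# in the file that opens, change `changefinder: no` to `changefinder: yes`
sudo systemctl restart netdata
```
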
The configuration for the changefinder collector defines how it will behave on your system and might take some experimentation over time to set it optimally for your node. Out of the box, the config comes with some sane defaults to get you started that try to balance the flexibility and power of the ML models with the goal of being as cheap as possible in terms of cost on the node resources.
Note: If you are unsure about any of the below configuration options then it's best to just ignore all this and
leave the `changefinder.conf` file alone to begin with. Then you can return to it later if you would like to tune things
a bit more once the collector has been running for a while and you have a feeling for its performance on your node.
Edit the `python.d/changefinder.conf` configuration file using `edit-config` from your agent's config directory, which
is usually at `/etc/netdata`.
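
For example:

```bash
cd /etc/netdata   # replace with your Netdata config directory, if different
sudo ./edit-config python.d/changefinder.conf
```
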
The default configuration should look something like this. Here you can see each parameter (with sane defaults) and some information about each one and what it does.
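
Since the full shipped file isn't reproduced on this page, the sketch below only illustrates the shape of a job definition using the parameters discussed above (`cf_threshold`, `n_score_samples`, `show_scores`); the remaining field names and values are assumptions, so treat the `changefinder.conf` that ships with your agent as the source of truth.

```yaml
local:
  # a friendly name for this job (assumed field)
  name: 'local'
  # the host to pull chart data from (assumed field)
  host: '127.0.0.1:19999'
  # a regex of the charts to score, e.g. all system charts (assumed field)
  charts_regex: 'system\..*'
  # the percentile above which the latest score raises a flag on changefinder.flags
  cf_threshold: 99
  # how many recent scores to keep when converting a raw score into a percentile
  n_score_samples: 14400
  # set to true to also chart the percentile scores on changefinder.scores
  show_scores: false
```
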

## Troubleshooting

To see any relevant log messages you can use a command like below.
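
For example (the log path below assumes a default install; adjust it if your logs live elsewhere):

```bash
grep 'changefinder' /var/log/netdata/error.log
```
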
If you would like more detail, you can log in as the `netdata` user and run the collector in debug mode.
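
A sketch of what that might look like; the plugin path below is the typical install location and may differ on your system:

```bash
# become the netdata user
sudo su -s /bin/bash netdata
# run the collector once in debug mode; `nolock` lets it run even while the
# agent's own instance of the collector is active
/usr/libexec/netdata/plugins.d/python.d.plugin changefinder debug trace nolock
```
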

## Notes

- It may take an hour or two (depending on your choice of `n_score_samples`) for the collector to 'settle' into its
  typical behaviour in terms of the trained models and scores you will see in the normal running of your node. Mainly
  this is because it can take a while to build up a proper distribution of previous scores in order to convert the raw
  score returned by the ChangeFinder algorithm into a percentile based on the most recent `n_score_samples` that have
  already been produced. So when you first turn the collector on, it will have a lot of flags in the beginning and then
  should 'settle down' once it has built up enough history. This is a typical characteristic of online machine learning
  approaches which need some initial window of time before they can be useful.
- As this collector does most of the work in Python itself, you may want to try it out first on a test or development
  system to get a sense of its performance characteristics on a node similar to where you would like to use it.
- On a development n1-standard-2 (2 vCPUs, 7.5 GB memory) vm running Ubuntu 18.04 LTS and not doing any work, some of the
  typical performance characteristics we saw from running this collector (with defaults) were:
  - A runtime (`netdata.runtime_changefinder`) of ~30ms.
  - Typically ~1% additional cpu usage.
  - About ~85mb of ram (`apps.mem`) being continually used by the `python.d.plugin` under default configuration.

## Useful links and further reading

- PyPi changefinder reference page.
- GitHub repo for the changefinder library.
- Relevant academic papers:
  - Yamanishi K, Takeuchi J. A unifying framework for detecting outliers and change points from nonstationary time series data. 8th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD02. 2002: 676. (pdf)
  - Kawahara Y, Sugiyama M. Sequential Change-Point Detection Based on Direct Density-Ratio Estimation. SIAM International Conference on Data Mining. 2009: 389–400. (pdf)
  - Liu S, Yamada M, Collier N, Sugiyama M. Change-point detection in time-series data by relative density-ratio estimation. Neural Networks. Jul.2013 43:72–83. [PubMed: 23500502] (pdf)
  - T. Iwata, K. Nakamura, Y. Tokusashi, and H. Matsutani, “Accelerating Online Change-Point Detection Algorithm using 10 GbE FPGA NIC,” Proc. International European Conference on Parallel and Distributed Computing (Euro-Par’18) Workshops, vol.11339, pp.506–517, Aug. 2018 (pdf)
- The ruptures python package is also a good place to learn more about changepoint detection (mostly offline as opposed to online but deals with similar concepts).
- A nice blog post showing some of the other options and libraries for changepoint detection in Python.
- Bayesian changepoint detection library - we may explore implementing a collector for this or integrating this approach into this collector at a future date if there is interest and it proves computationally feasible.
- You might also find the Netdata anomalies collector interesting.
- Anomaly Detection wikipedia page.
- Anomaly Detection YouTube playlist maintained by andrewm4894 from Netdata.
- awesome-TS-anomaly-detection Github list of useful tools, libraries and resources.
- Mendeley public group with some interesting anomaly detection papers we have been reading.
- Good blog post from Anodot on time series anomaly detection. Anodot also have some great whitepapers in this space that some may find useful.
- Novelty and outlier detection in the scikit-learn documentation.