The aim of this project is to develop a system whereby network measurements from a variety of sources can be used to detect and report on events occurring on the network in a timely and useful fashion. The project can be broken down into four major components:

Measurement: The development and evaluation of software to collect the network measurements. Some software used will be pre-existing, e.g. SmokePing, but most of the collection will use our own software, such as AMP, libprotoident and maji. This component is mostly complete.

Collection: The collection, storage and conversion to a standardised format of the network measurements. Measurements will come from multiple locations within or around the network, so we will need a system for receiving measurements from monitor hosts. Raw measurement values will need to be stored in a way that supports querying, particularly for later presentation. Finally, each measurement technology is likely to use a different output format, so measurements will need to be converted to a standard format suitable for the next component.
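As an illustration of what such a standardised record might look like, here is a minimal sketch; the field names, sources and JSON encoding are assumptions for illustration, not the project's actual schema:

```python
from dataclasses import dataclass, asdict
import json
import time

# Hypothetical standardised measurement record. Field names are
# illustrative only, not the real format used by the project.
@dataclass
class Measurement:
    source: str       # e.g. "amp", "smokeping", "libprotoident"
    monitor: str      # host that took the measurement
    metric: str       # e.g. "latency_ms"
    timestamp: float  # seconds since the epoch
    value: float

    def to_json(self) -> str:
        # One flat JSON object per measurement keeps conversion from
        # each collector's native format simple.
        return json.dumps(asdict(self))

m = Measurement("amp", "monitor1.example.org", "latency_ms",
                time.time(), 12.4)
record = m.to_json()
```

Each collector would then only need a small shim that maps its native output onto records like this before handing them to the eventing component.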

Eventing: Analysis of the measurements to determine whether network events have occurred. Because we are using multiple measurement sources, this component will need to aggregate events that are detected by multiple sources into a single event. This component also covers alerting, i.e. deciding how serious an event is and alerting network operators appropriately.
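The aggregation step described above could be sketched as follows, assuming events arrive as simple (timestamp, target, source) tuples and that events for the same target within a fixed window belong together; both the event shape and the window size are illustrative assumptions:

```python
# Hypothetical aggregation: events detected close together in time for
# the same target are merged into one combined event, recording every
# measurement source that saw it.  WINDOW is an illustrative value.
WINDOW = 60.0  # seconds

def aggregate(events):
    """events: iterable of (timestamp, target, source) tuples."""
    merged = []
    for ts, target, source in sorted(events):
        last = merged[-1] if merged else None
        if last and last["target"] == target and ts - last["end"] <= WINDOW:
            # Same target, close enough in time: fold into one event.
            last["end"] = ts
            last["sources"].add(source)
        else:
            merged.append({"target": target, "start": ts, "end": ts,
                           "sources": {source}})
    return merged

events = [(100.0, "hostA", "amp"),
          (130.0, "hostA", "smokeping"),
          (400.0, "hostA", "amp")]
groups = aggregate(events)
```

An event confirmed by several sources (here the first group, seen by both amp and smokeping) is a natural candidate for a higher-severity alert than one seen by a single source.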

Presentation: Allowing network operators to inspect the measurements being reported for their network and see the context of the events that they are being alerted on. The general plan here is for web-based zoomable graphs with a flexible querying system.




Due to the impending deadline for MSI funding proposals, last week was quite a mixed bag of tasks.

Developed another event detector that tries to detect obvious spikes in a relatively constant time series. The likelihood that a spike will be treated as an event is inversely correlated with the amount of noise in the time series, i.e. a spike in noisy data won't register as an event but a smaller spike following a long period of consistency would. Also started looking at decomposing time series with R again.
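The idea behind that detector can be sketched in a few lines of pure Python; this is a minimal illustration of noise-scaled thresholding, not the actual detector, and the factor and noise floor values are assumptions:

```python
import statistics

def is_spike(history, value, factor=5.0, noise_floor=0.1):
    """Flag `value` as a spike if it deviates from the recent mean by
    more than `factor` times the recent noise level.  The noisier the
    history, the larger the deviation required, so a modest jump after
    a long quiet period registers while the same jump buried in noisy
    data does not.  Parameter values are illustrative assumptions."""
    mean = statistics.mean(history)
    # Standard deviation as a crude noise estimate; the floor stops a
    # near-constant series from flagging every tiny wobble.
    noise = max(statistics.pstdev(history), noise_floor)
    return abs(value - mean) > factor * noise

quiet = [10.0, 10.1, 9.9, 10.0, 10.1]
noisy = [5.0, 15.0, 8.0, 13.0, 6.0]
spike_quiet = is_spike(quiet, 13.0)  # clear departure from a flat series
spike_noisy = is_spike(noisy, 13.0)  # same value, but within the noise
```

With these inputs the jump to 13.0 registers against the quiet series but not against the noisy one, matching the behaviour described above.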

Wrote a lab exercise for 312 on configuring a DNS server. Spent a couple of hours in R block during the designated 312 lab time to help out students, although they were mostly working on previous labs (or wasting time looking at meme pictures).

Went over the methodology sections of both MSI proposals with Jamie and Brendon. Rewrote the methodologies to better suit the requirements, i.e. more emphasis on the research tasks that we will be carrying out.




Short week - on holiday until Thursday.

Caught up with various support requests once I got back. Had a long chat with Andreas about time series and how we might be able to get better results when analysing the data produced by AMP and libprotoident.

Concluded that we need to start by making sure we can deal with the more obvious cases properly - in particular, time series where the reported value is mostly constant, which we commonly get from AMP. The detectors we have at the moment are based on standard deviation, which doesn't work well when the standard deviation approaches zero. Developed a detector that works much better in those cases and also started adding code that will select an appropriate detector depending on the type of time series we have observed.
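A rough pure-Python sketch of that dispatch idea, classifying a series as near-constant or variable and picking a detection rule accordingly; the classification and detection thresholds here are illustrative assumptions, not the real code:

```python
import statistics

def classify(history):
    # A series whose spread is tiny relative to its level is treated as
    # near-constant.  The 5% cutoff is an illustrative assumption.
    if statistics.pstdev(history) < 0.05 * abs(statistics.mean(history)):
        return "constant"
    return "variable"

def detect(history, value):
    mean = statistics.mean(history)
    if classify(history) == "constant":
        # Near-constant series: stddev-based tests break down because
        # the stddev is close to zero, so compare against the level
        # itself instead (20% departure, an illustrative threshold).
        return abs(value - mean) > 0.2 * abs(mean)
    # Variable series: fall back to a conventional stddev-based test.
    return abs(value - mean) > 3 * statistics.pstdev(history)

constant_series = [100.0] * 10
variable_series = [80.0, 120.0, 90.0, 110.0, 100.0]
```

On the near-constant series a jump to 130 is flagged while 101 is not; on the variable series a value of 110 sits comfortably inside three standard deviations and passes.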




Started looking at Andreas' code in more detail by throwing a few different time series at it and seeing what anomalies it detects. Was not entirely happy with the results and spent more time than I would have liked delving into the code to figure out what was going on.

This also involved spending a bit of time with R and its time series decomposition functions to see if that would shed any light on what we should be finding in the time series data.
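For classical decomposition (as in R's decompose()), the trend component is essentially a centred moving average over one seasonal period. A rough pure-Python analogue of just that step, with edge handling simplified and an odd period assumed for a symmetric window:

```python
# Sketch of the trend-estimation step of a classical time series
# decomposition: a centred moving average over one seasonal period.
# Assumes an odd period so the window is symmetric; edges get None
# because there is no full window there.
def moving_average_trend(series, period):
    half = period // 2
    trend = []
    for i in range(len(series)):
        if i < half or i >= len(series) - half:
            trend.append(None)
        else:
            window = series[i - half:i + half + 1]
            trend.append(sum(window) / len(window))
    return trend

series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
trend = moving_average_trend(series, 3)
```

Subtracting the trend from the series leaves the seasonal-plus-remainder part, which is where the anomalies we care about should show up.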

Spent Thursday and Friday at the cricket.




Released libtrace 3.0.14 - mostly just a bug fix release. I also split the I/O code out into a separate library so that it can be used outside of libtrace.

Took a quick look at maji again to see if we can use it as part of the MSI project. Fixed up some bugs that became apparent when exporting lots of flow records. Also decided that maji would work a lot better if it underwent a major design change, but resisted the temptation to do so for now.

Secured the RT exporter connected to the live capture point so that only WAND machines can connect to it - someone from a lightwire address had connected to it and sent something invalid which broke the whole wdcap process. The RT exporter also now handles invalid client responses better :)
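The access restriction amounts to checking each connecting client against an allowlist of networks before serving it. A minimal sketch using Python's ipaddress module; the prefixes here are placeholders, not WAND's real address ranges:

```python
import ipaddress

# Hypothetical allowlist check for the exporter's accept loop: only
# clients from the listed networks may connect.  These prefixes are
# placeholders for illustration only.
ALLOWED = [ipaddress.ip_network("10.0.0.0/8"),
           ipaddress.ip_network("192.168.1.0/24")]

def allowed(addr: str) -> bool:
    """Return True if the client address falls inside any allowed network."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in ALLOWED)

ok = allowed("192.168.1.42")      # inside an allowed prefix
blocked = allowed("203.0.113.7")  # outside every allowed prefix
```

Checking the peer address immediately after accept(), before reading any client input, means a misbehaving outside client never gets the chance to send something that could break the process.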

Started looking at Andreas' time series anomaly detection code. The existing system only really works with offline data, so the first goal is to get it running against a "live" input source.




Added some new http2 test destinations to the main AMP test schedule. Started running them on Massey in response to a query about web performance, and in doing so found and fixed a few display bugs. Had another look at using the logs from the test to generate waterfall graphs of http connections, and found a few cases where libcurl might not be behaving as expected when resolving addresses.

Spent some time talking with Shane and planning out how we can fit everything together in a useful fashion for the MSI project.

Started investigating the best way to aggregate measurements from the last few years of the KAREN weathermap to look at the growth of the network.

Watched some of the streamed presentations by Josh on Openflow, which looked quite interesting.