- March 21, 2017
- By Mike Wise
We recently spoke on a panel called “Artificial Intelligence Powered Analytics in Production Operations” (you can watch the video replay here). Perspica’s CTO, JF Huard, was joined by Justin Fitzhugh, VP of Technical Operations for Instart Logic, and Manoj Choudhary, CTO of Loggly. One of the recurring themes of the panel was that manual thresholds don’t work in today’s big data world. Below is a summary of that thread of the conversation.
Traditional monitoring practices require operators to set static thresholds by hand. In today’s “hyper-scale” environments that practice has become obsolete, and many operations teams are turning to artificial intelligence to automate it.
Problem #1: Too many metrics and too much data
Cloud platforms like AWS and open source big data software have made it easy to launch and scale new applications quickly. It’s not unusual for an application to generate tens or hundreds of thousands of metrics and half a million data points per second. At that volume, setting thresholds by hand simply isn’t feasible.
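To make the contrast concrete, here is a minimal sketch of what “automating the threshold” can mean in the simplest case: instead of an operator picking a fixed limit per metric, each metric learns a baseline from its own recent history and flags values that deviate sharply from it. The class name, window size, and sigma multiplier below are illustrative assumptions, not anything Perspica or the panelists specified.

```python
from collections import deque
import statistics

class DynamicThreshold:
    """Per-metric baseline learned from recent samples, replacing a
    hand-set static limit. Window and sigma values are illustrative."""

    def __init__(self, window=60, sigmas=3.0):
        self.samples = deque(maxlen=window)  # rolling history per metric
        self.sigmas = sigmas

    def is_anomalous(self, value):
        # Warm-up: collect history before judging anything.
        if len(self.samples) < self.samples.maxlen:
            self.samples.append(value)
            return False
        mean = statistics.fmean(self.samples)
        stdev = statistics.pstdev(self.samples)
        # Flag values far outside the learned band; guard zero variance.
        anomalous = abs(value - mean) > self.sigmas * max(stdev, 1e-9)
        if not anomalous:
            # Learn only from normal data so spikes don't widen the band.
            self.samples.append(value)
        return anomalous

# One instance per metric; at hundreds of thousands of metrics, this
# per-metric learning is exactly what no human team can do by hand.
cpu = DynamicThreshold(window=5)
for sample in [100.0, 100.0, 100.0, 100.0, 100.0, 100.0]:
    cpu.is_anomalous(sample)   # baseline settles around 100
print(cpu.is_anomalous(500.0))  # a sharp spike is flagged
```

Real products layer far more on top of this (seasonality, correlated metrics, topology), but even this sketch shows why the approach scales where manual limits don’t: the threshold is derived from data, not configured.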