The world is an imperfect place, and that imperfection is a fact of life for those of us managing critical infrastructure. Combine it with the tsunami of disparate data sources involved in monitoring that infrastructure, and the situation does not get any easier.
The sheer variety of choices made by vendors and coders often obscures the standards the data was intended to follow. Remote monitoring of data center systems is your eyes and ears, and no matter how ready you are to respond, things go awry quickly if you miss a critical event or state. Poor handling of normalization can be your weakest link, and it is one most DCIM solutions struggle with.
To achieve steadfast, reliable monitoring, it is imperative to normalize your monitored data. In the chain from the device, through the protocols, into your system, and on to your database, the goal is to normalize the data as early as possible. The fewer the steps before normalization occurs, the less chance of error. Ideally, the data is normalized at the first step in your monitoring system, the moment it is retrieved from the device.
The OpenData platform follows this approach. The Collector component can normalize data as soon as it is returned by the base protocol, factoring in scaling and offsets as the raw data arrives. Following standards, such as converting all power data to kW, lets you maintain a homogeneous set of rules and alarms for state monitoring and reporting. With alarms and reports all driven by normalized data, you avoid mistakes that would otherwise bleed into your infrastructure over time as your system evolves and grows.
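To make the idea concrete, here is a minimal sketch of normalize-at-collection. The names (`PointConfig`, `normalize`, the point names) are hypothetical illustrations, not the actual OpenData Collector API: each monitored point carries a scale, an offset, and a target unit, so raw protocol values are converted to the standard unit the moment they arrive.

```python
from dataclasses import dataclass

@dataclass
class PointConfig:
    """Per-point normalization settings (hypothetical example, not a real API)."""
    name: str
    scale: float   # multiplier applied to the raw reading
    offset: float  # value added after scaling
    unit: str      # normalized unit stored downstream

def normalize(raw: float, cfg: PointConfig) -> float:
    """Apply scale and offset as soon as the raw value is read from the device."""
    return raw * cfg.scale + cfg.offset

# A meter reports power in watts; the site standard is kW.
kw_point = PointConfig(name="pdu1.power", scale=0.001, offset=0.0, unit="kW")
print(normalize(4500.0, kw_point), kw_point.unit)

# A sensor reports temperature in tenths of a degree Celsius.
temp_point = PointConfig(name="crac2.supply_temp", scale=0.1, offset=0.0, unit="degC")
print(normalize(215.0, temp_point), temp_point.unit)
```

Because every downstream rule and alarm sees only the normalized value and its declared unit, a threshold such as "alarm above 5 kW" works identically no matter which vendor or protocol produced the raw number.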
We know how to do this and are happy to assist you with your project. You can reach us at email@example.com to see how we can help bring a bit of "sanity" to the data from your critical infrastructure, whether that be a captive data center, a colo, telecom networks, or distributed assets located in colos or edge data centers.