Rapid Data Collection at Scale

DCIM software makes your job easier by turning large sets of device data into easy-to-digest overviews of your data center’s health and status. That data set is typically large, sometimes extremely large, and must be collected in near real-time to provide a meaningful, responsive experience for the data center operator.

The hard reality is that the larger the data center, the larger the data set – and the larger the data set, the greater the need for rapid understanding of the mountain of data. Therefore, the cornerstone of meaningful insight into your data center status is the ability to collect data at scale reliably.

What does “at scale” mean? In today’s data centers, it means tens of thousands to hundreds of thousands of devices and data points collected in near real-time, in an ongoing stream of data from physical and diverse sources. This means multiple vendors, multiple protocols, multiple sites, multiple media, and multiple collection rates – again in near real-time, to allow for rapid response to a change in status or an unexpected critical event.
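
To make that heterogeneity concrete, here is a minimal sketch of how such a mixed fleet might be described. Everything in it, the `CollectionPoint` class, the vendor names, the addresses, is a hypothetical illustration rather than any real DCIM schema: each monitored point carries its own site, protocol, and polling rate, yet all of them feed one stream.

```python
# Hypothetical illustration: each monitored point carries its own
# protocol, site, and polling rate. Not any real DCIM product's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class CollectionPoint:
    site: str          # physical location of the device
    vendor: str        # equipment manufacturer
    protocol: str      # e.g. "snmp", "modbus-tcp", "bacnet-ip"
    address: str       # network address of the device
    point_id: str      # the specific OID/register/object to read
    interval_s: float  # polling rate; critical points poll faster

# One fleet mixes vendors, protocols, sites, and rates in a single stream.
fleet = [
    CollectionPoint("site-a", "vendor-x", "snmp",       "10.0.1.10", "1.3.6.1.2.1.1.3.0", 5.0),
    CollectionPoint("site-a", "vendor-y", "modbus-tcp", "10.0.1.22", "hr:40001",          1.0),
    CollectionPoint("site-b", "vendor-z", "bacnet-ip",  "10.1.7.5",  "ai:3",              30.0),
]

if __name__ == "__main__":
    for p in sorted(fleet, key=lambda p: p.interval_s):
        print(f"{p.site}/{p.protocol} {p.point_id} every {p.interval_s}s")
```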

A robust data-center architecture leverages redundancy as the first line of defense. Thus, if a device or service fails, the system’s redundancy automatically shifts the load to maintain uptime. This architecture addresses the first minute of a critical event, but you need rapid awareness of the event to proactively understand the status and ensure the issue does not cascade.

This brings us back to data at scale. How do you ensure the stream remains unbroken even though it is flowing through the same data center that is currently experiencing a critical event? Like your data center, your DCIM solution needs a well-designed, robust architecture that leverages redundancy as the first line of defense.
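
As a rough illustration of redundancy as that first line of defense, the sketch below shows an active/standby pair of collectors, assuming a simple try-then-fail-over pattern. The `Collector` class and every name in it are hypothetical; a production DCIM platform would rely on its own clustering and health checks.

```python
# Hypothetical active/standby failover sketch. Names are illustrative only.
class Collector:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def poll(self, point):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} read {point}"

def poll_with_failover(primary, standby, point):
    """Try the primary collector; shift the load to the standby on failure."""
    try:
        return primary.poll(point)
    except ConnectionError:
        # Redundancy keeps the stream flowing during the first minute
        # of the event, before anyone has even seen an alarm.
        return standby.poll(point)

if __name__ == "__main__":
    primary, standby = Collector("collector-a"), Collector("collector-b")
    print(poll_with_failover(primary, standby, "pdu-1/amps"))
    primary.healthy = False  # simulate a critical event
    print(poll_with_failover(primary, standby, "pdu-1/amps"))
```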

Data collection should be driven by a distributed architecture, with the lowest-level network transactions occurring as close as possible to the physical equipment being monitored. The fewer hops and devices between your data collection and its target, the stronger the resiliency. The more distributed this data collection process, the lower the impact of any single issue in the data center. Redundant paths for the data further strengthen your resilience. All of these together help ensure a steady stream of data.
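
A hedged sketch of that idea: an edge collector polls equipment locally, keeping network hops short, and forwards readings over redundant upstream paths. The `EdgeCollector` and `UpstreamPath` classes below are hypothetical illustrations of the pattern, not an actual product API.

```python
# Hypothetical sketch of distributed collection with redundant paths.
class UpstreamPath:
    def __init__(self, name, up=True):
        self.name, self.up = name, up

    def send(self, reading):
        if not self.up:
            raise ConnectionError(self.name)
        print(f"sent via {self.name}: {reading}")

class EdgeCollector:
    """Polls locally (few network hops) and ships data over redundant paths."""
    def __init__(self, site, paths):
        self.site, self.paths = site, paths

    def forward(self, reading):
        for path in self.paths:   # try each redundant path in turn
            try:
                path.send(reading)
                return True
            except ConnectionError:
                continue          # path down; fall through to the next
        return False              # all paths down; buffer locally (not shown)

if __name__ == "__main__":
    paths = [UpstreamPath("fiber-a", up=False), UpstreamPath("fiber-b")]
    edge = EdgeCollector("site-a", paths)
    edge.forward({"point": "crah-3/supply-temp-c", "value": 18.4})
```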

Once you have all these benefits in place, the last piece that allows this process to operate in the modern “at scale” world is ensuring the accuracy of both your data and your time stamps. Yes, time stamps. You did read our blog on how important accurate data is when analyzing an event stream, right? With data coming in at scale, recording the time stamp for each data value can be challenging for a single system, no matter how robust. Speed and resources can only push a process thread so far before data values become stale and carry timestamps that no longer reflect when the values were actually read. The solution: do this at the edge, when the data is collected. If the data-collection processes handle the time-stamping, you gain the benefit of distribution and the ability to collect your data at scale.
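
To illustrate why edge time-stamping matters, the following sketch stamps each reading the moment it is collected, then simulates a central server that ingests it later. The half-second delay and all names are made-up assumptions; under real load, that central lag is exactly what makes centrally assigned timestamps stale.

```python
# Hypothetical sketch: stamp readings at the edge, at collection time,
# rather than when a central server finally processes them.
import time
from datetime import datetime, timezone

def collect_at_edge(point, value):
    # Timestamp assigned immediately, by the distributed collector,
    # so it reflects when the value was actually read.
    return {"point": point, "value": value,
            "ts": datetime.now(timezone.utc).isoformat()}

def central_ingest(reading):
    # Central processing happens later; under load this lag grows,
    # which is why stamping here would yield stale timestamps.
    time.sleep(0.5)  # simulated queueing/processing delay
    reading["ingested_at"] = datetime.now(timezone.utc).isoformat()
    return reading

if __name__ == "__main__":
    r = central_ingest(collect_at_edge("ups-2/load-pct", 41.7))
    print("collected:", r["ts"])
    print("ingested: ", r["ingested_at"])
```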

Are you looking for even more resilience in DCIM? Need to manage at scale in real-time? Take a look at our OpenData DCIM solution, then reach out to us at sales@modius.com and let us show you how we can make managing rapid data collection at scale easy!
