With the proliferation of IoT, "edge" infrastructure is playing an increasingly important role in overall IT infrastructure; by 2025, 75% of the data generated by companies worldwide is expected to be created and processed at the edge. Against this backdrop, Schneider Electric's event themed 'Life at the Edge' presented new trends in digital transformation and business profitability built on cloud-based software, AI, and micro data center infrastructure, and introduced Schneider Electric's strategies, solutions, and collaborations for realizing these opportunities.
Kevin Brown, Senior Vice President of Innovation and CTO of the Secure Power Division, introduced some practical considerations for implementing an edge data center. While infrastructure at the edge matters more than most people expect, it is difficult to put resources in place there to increase availability, and availability itself must be approached differently than in traditional data centers. Schneider Electric pointed to cloud-based management systems and AI technology that leverages vast amounts of data as effective ways to maximize edge resiliency.
|▲ Kevin Brown, Senior Vice President of Innovation and CTO of the Secure Power Division|
|▲ For the availability of the infrastructure as a whole, local edges must be more available than commonly expected.|
Kevin Brown explained that while there are many different views on edge infrastructure and hybrid edge architectures, they can broadly be divided into three tiers: large central data centers, 'regional edges' distributed by region, and the 'local edge', where data is created and consumed closest to the user. Even within this architecture, the model varies depending on how much computing power the local edge carries and which devices it connects to and operates with. In every case, however, the local edge is where devices connect first.
The first thing to consider when implementing local edge availability and resiliency is that people's "expectations" of IT have changed. A service outage caused by some failure may barely affect one user yet be a disaster for another. Traditional data centers grade availability in tiers, ranging from Tier 1 at 99.67% to Tier 4 at 99.995%. The percentages may look like a difference of mere decimal places, but in terms of fault tolerance, Tier 1 still qualifies with up to 28.8 hours of downtime per year, while Tier 4 allows only about 26 minutes.
Typically, data centers target Tier 3, with 99.98% availability and an annual downtime of 1.6 hours. Efforts have constantly been made to push this figure up, but Brown pointed out that focusing only on downtime from an IT perspective can distort the results regardless of the user experience. Moreover, combining a Tier 3 central data center with a Tier 1 local edge drops overall availability to 99.65%, or 30.7 hours of downtime per year. Brown therefore stressed that the edge, too, must be built as a mission-critical data center, beyond common expectations.
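The tier arithmetic above is easy to verify: annual downtime is simply (1 − availability) over a year, and when a central data center and a local edge are chained in series, the service is up only when both are. A minimal sketch:

```python
HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours(availability_pct: float) -> float:
    """Annual downtime implied by an availability percentage."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

def combined_availability(*availabilities_pct: float) -> float:
    """Series availability: the service is up only when every layer is up."""
    product = 1.0
    for a in availabilities_pct:
        product *= a / 100
    return product * 100

print(downtime_hours(99.67))    # Tier 1: ~28.9 hours per year
print(downtime_hours(99.995) * 60)  # Tier 4: ~26.3 minutes per year
edge_chain = combined_availability(99.98, 99.67)  # Tier 3 DC + Tier 1 edge
print(edge_chain)               # ~99.65%
print(downtime_hours(edge_chain))   # ~30.7 hours per year
```

This also makes the article's point concrete: the weakest link dominates, so a Tier 1 local edge pulls a Tier 3 service below Tier 1 availability overall.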
Implementing an edge site to this mission-critical standard calls for physical security, redundant configurations, independent cooling, monitoring, management, local support staff, and more. To meet these requirements, Brown cited an integrated ecosystem, appropriate management tools, and the use of analytics and AI to improve edge resilience. These enable monitoring of the physical environment, access control to equipment, remote configuration, and proactive responses before problems occur, while AI can also reduce the burden on management resources.
|▲ In the edge environment, an 'integrated ecosystem' built by multiple players becomes all the more important.|
|▲ A cloud-based implementation is effective for the management of distributed edge environments.|
The core qualities of an 'integrated ecosystem' built through active cooperation are standardization, robustness, and simplicity, which overcome the defining characteristics of the edge environment: many distributed installations and scarce on-site management resources. At the edge, a failure in the user's environment halts all surrounding consumption and distribution, so management and security must be considered from construction onward. The edge environment should be monitorable and manageable from anywhere, and since systems are built and deployed through partners, staff training must also be considered so that varied customer-specific deployments and failure situations can be handled quickly.
Staff training, in turn, is a crucial part of system construction, and here the importance of the integrated ecosystem shows again. For example, if a Schneider Electric customer is considering offering services as a 'managed service provider' and its own customers are in locations it cannot reach by dispatching staff directly, it may need to entrust the work to a local service provider. Depending on the customer's application, deploying computing power at the local edge may also be an option. Schneider Electric added that it works closely with a range of partners, including HPE, Dell EMC, and Cisco, to address these diverse needs.
As for management tools for edge infrastructure deployed in physically remote locations, traditional tools are ill-suited to the edge in terms of access control, alarm handling, and management cycles. Schneider Electric therefore proposed a cloud-based edge management environment. With cloud-based management, remote devices can be accessed and monitored anytime, anywhere, with no need to worry about device scalability. Brown added that it is also advantageous for flexible payment models, software updates, and the adoption of new technologies such as AI.
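A loose illustration of why cloud-based management scales with the device count: in the toy fleet monitor below (all names are hypothetical, not Schneider Electric's API), edge devices push heartbeats to a cloud-side registry, new devices register implicitly, and an operator anywhere can list devices that have gone quiet.

```python
from datetime import datetime, timedelta, timezone

class FleetMonitor:
    """Toy sketch of cloud-side fleet monitoring; names are illustrative."""

    def __init__(self, stale_after: timedelta = timedelta(minutes=5)):
        self.last_seen: dict[str, datetime] = {}
        self.stale_after = stale_after

    def heartbeat(self, device_id: str, when: datetime) -> None:
        # Unknown devices register implicitly: adding a site needs no
        # on-premises capacity planning, only a new device_id.
        self.last_seen[device_id] = when

    def stale_devices(self, now: datetime) -> list[str]:
        # Devices silent longer than the threshold likely need attention.
        return sorted(d for d, t in self.last_seen.items()
                      if now - t > self.stale_after)

monitor = FleetMonitor()
t0 = datetime(2020, 1, 1, tzinfo=timezone.utc)
monitor.heartbeat("edge-seoul-01", t0)
monitor.heartbeat("edge-busan-02", t0 + timedelta(minutes=4))
print(monitor.stale_devices(t0 + timedelta(minutes=6)))  # ['edge-seoul-01']
```

A real deployment would replace the in-memory dict with a managed store and push alerts instead of polling, but the shape of the problem is the same.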
|▲ The use of cloud and AI can provide more accurate insights.|
In managing edge infrastructure, AI technology is expected to save human effort and enable efficient, data-driven management, but some preparation is required to use it properly. The first requirements are a secure, scalable, and robust cloud architecture and, alongside it, a "data lake" holding large amounts of normalized data. Beyond that, Kevin Brown noted, experts with a deep understanding of the system's behavior and access to machine learning expertise are also required.
In fact, when deriving insights from data, it is important to clarify which problems are to be analyzed with which data, and to collect and refine the data accordingly. Simply pouring data into an AI system does not yield results; most of data analysis consists of obtaining and refining data that can actually be analyzed. Of course, it is not easy for customers to identify and handle these points precisely, and since the problems customers face keep growing more complex, Brown noted, the difficulty keeps increasing.
Schneider Electric introduced the 'UPS Score' as a way to simplify the data analysis and insight side of UPS management. The UPS Score algorithmically analyzes information from hundreds of UPSs installed across various infrastructures, giving a clear picture of each UPS's current condition and enabling a series of steps to be taken before a failure occurs. The scoring criteria include the device's service life, battery life, temperature, phase balance, alarm data, and sensor data. This provides a more intuitive understanding of the current state and allows proactive responses before serious problems arise.
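Schneider Electric did not disclose the UPS Score algorithm, but the idea of collapsing the listed signals into one health number can be sketched roughly as follows; the field names, weights, and caps are all invented for illustration, not the actual model.

```python
from dataclasses import dataclass

@dataclass
class UpsTelemetry:
    # Illustrative fields mirroring the scoring criteria named in the article.
    service_years: float   # time in service
    battery_health: float  # 0.0-1.0, where 1.0 is a fresh battery
    temperature_c: float   # internal temperature
    phase_imbalance: float # 0.0-1.0, fraction of load imbalance across phases
    active_alarms: int

def ups_score(t: UpsTelemetry) -> int:
    """Collapse several health signals into a 0-100 score (hypothetical weights)."""
    score = 100.0
    score -= min(t.service_years * 2, 20)        # age penalty, capped
    score -= (1 - t.battery_health) * 30         # battery wear
    score -= max(t.temperature_c - 25, 0) * 0.5  # heat above 25 degrees C
    score -= t.phase_imbalance * 20              # unbalanced phases
    score -= min(t.active_alarms * 5, 25)        # active alarms, capped
    return max(int(round(score)), 0)

print(ups_score(UpsTelemetry(1, 0.9, 24.0, 0.05, 0)))  # healthy unit → 94
print(ups_score(UpsTelemetry(8, 0.5, 35.0, 0.2, 3)))   # degraded unit → 45
```

The value of such a score, as the article notes, is less in any single weight than in turning many raw signals into one number an operator can watch and act on before a failure.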