acrofan


Kevin Brown - From Buzzwords to Reality: Managing Edge Data Centers

Schneider Electric introduced its strategies and solutions for edge infrastructure at the ‘Edge Press Conference 2019’, held on September 19 at the Marina Bay Sands Hotel in Singapore. Under the theme 'Life at the Edge', the company presented its EcoStruxure™ platform and a range of solutions for building and operating efficient edge infrastructure. With the spread of IoT, the edge is taking on a growing share of the overall IT infrastructure, and by 2025, 75% of the data generated by companies worldwide is expected to be created and processed at the edge. Against this backdrop, the event highlighted new trends in digital transformation and in improving business profitability through cloud-based software, AI, and micro data center infrastructure, and introduced Schneider Electric's strategies, solutions, and collaborations for realizing these opportunities.

Kevin Brown, Senior Vice President of Innovation and CTO of the Secure Power Division, walked through practical considerations for implementing an edge data center. Although infrastructure at the edge matters more than most organizations expect, it is difficult to put resources in place there to raise availability, and availability must be approached differently than in a traditional data center. Schneider Electric pointed to cloud-based management systems and AI that can leverage the resulting flood of data as the most effective ways to maximize edge resiliency.

▲ Kevin Brown, Senior Vice President of Innovation and CTO of the Secure Power Division

▲ Considering the availability of the entire infrastructure, local edge availability needs to be higher than one might expect.

Kevin Brown explained that although there are many views on edge infrastructure and hybrid edge architectures, they broadly fall into three layers: large central data centers, the ‘regional edge’ distributed by region, and the ‘local edge’ that creates and consumes data closest to the user. Even within this architecture, the model varies with how much computing power the local edge carries and which devices it connects to and operates with, but in every case the local edge is where devices connect first.

The first thing to consider for local edge availability and resiliency is that people's "expectations" of IT have changed. A service outage caused by some failure may barely affect one user yet be a disaster for another. Traditional data center availability is graded in tiers, from Tier 1 at 99.67% availability to Tier 4 at 99.995%. The percentages look like differences of a few decimal places, but in terms of tolerated downtime, Tier 1 can be met even with 28.8 hours of service interruption per year, whereas Tier 4 allows only about 26 minutes. Data centers typically target Tier 3, with 99.98% availability and 1.6 hours of annual downtime. Efforts are constantly made to push this figure up, but Brown cautioned that focusing only on downtime from an IT perspective can distort the results regardless of the user experience. Moreover, combining a Tier 3 central data center with a Tier 1 local edge drops end-to-end availability to about 99.65%, or 30.7 hours of downtime per year.
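To make the arithmetic behind those figures concrete, here is a minimal sketch (not Schneider Electric's tooling) that multiplies the availabilities of a serially dependent Tier 3 central site and Tier 1 local edge, assuming independent failures, and converts the result into annual downtime. The tier availabilities are the figures quoted above; everything else is a generic illustration.

# Minimal sketch: composite availability of serially dependent sites, assuming
# failures at each site are independent and the service is down whenever any
# site in the chain is down.

HOURS_PER_YEAR = 8760

def composite_availability(*availabilities: float) -> float:
    """Availability of a serial chain of sites (product of individual values)."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def annual_downtime_hours(availability: float) -> float:
    return (1.0 - availability) * HOURS_PER_YEAR

tier3_central = 0.9998  # ~1.75 h of downtime per year on its own
tier1_edge = 0.9967     # ~28.9 h of downtime per year on its own

combined = composite_availability(tier3_central, tier1_edge)
print(f"combined availability: {combined:.4%}")                               # ~99.65%
print(f"combined annual downtime: {annual_downtime_hours(combined):.1f} h")   # ~30.7 h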
Indeed, Kevin Brown stressed that the edge must also be built as a mission-critical data center, beyond what people typically expect. Implementing an edge site to that standard calls for things such as facility security, redundant configurations, independent cooling, monitoring and management, and local support staff. To meet these requirements, he pointed to an integrated ecosystem, appropriate management tools, and the use of analytics and AI as the ways to improve edge resilience. Together, these make it possible to monitor the physical environment, control access to equipment, change settings remotely, and respond proactively before problems occur, while AI can also reduce the burden on limited management resources.

▲ In the edge environment, an 'integrated ecosystem' built by multiple players becomes even more important.

▲ A cloud-based implementation is effective for managing distributed edge environments.

The core of the 'integrated ecosystem', achieved through active cooperation within the ecosystem, lies in standardization, robustness, and simplicity, which together overcome the defining characteristics of the edge: many distributed installation sites and a shortage of resident management resources. At the edge, too, a failure in the user's environment brings everything that consumes and distributes around it to a halt, so management and security must be considered from the moment the edge is built. The edge environment should be able to be monitored and managed from anywhere, and because systems are built and deployed through partners, staff training matters: partners must be able to respond quickly to customer-specific deployment cases and failure scenarios. Training is in fact a critical part of system construction, and this is another place where the integrated ecosystem proves its value. For example, if a Schneider Electric customer wants to offer services as a managed service provider but has sites it cannot serve by dispatching its own staff, it may need to entrust the relevant information to a local service provider; and depending on the customer's application, a configuration that deploys computing power at the local edge is also possible. Schneider Electric added that it works closely with a range of partners, including HPE, Dell EMC, and Cisco, to address these diverse needs.

As for management tools for edge infrastructure deployed in physically remote locations, traditional tools were not suited to the edge in terms of access control, alarm handling, and management cycles. Schneider Electric therefore proposed a cloud-based edge management environment to overcome this. With cloud-based management, remote devices can be accessed and monitored anytime, from anywhere, and device scalability is no longer a concern. Kevin Brown added that it also brings advantages in flexible payment models, software updates, and the adoption of new technologies such as AI.
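As a rough illustration only (this is not Schneider Electric's EcoStruxure API; the endpoint, site name, and threshold below are hypothetical), a cloud-managed edge site boils down to a local agent that pushes telemetry and alarms to a central service so that any number of distributed sites can be watched from anywhere:

# Hypothetical sketch of a local edge agent reporting to a cloud management service.
import json
import time
import urllib.request

CLOUD_ENDPOINT = "https://cloud.example.com/edge/telemetry"  # hypothetical URL
POLL_INTERVAL_S = 60
TEMP_ALARM_C = 35.0  # illustrative alarm threshold

def read_local_sensors() -> dict:
    """Stand-in for reading the rack's environmental and power sensors."""
    return {"site": "edge-042", "temperature_c": 27.4, "ups_load_pct": 61.0}

def push_to_cloud(sample: dict) -> None:
    body = json.dumps(sample).encode("utf-8")
    req = urllib.request.Request(
        CLOUD_ENDPOINT, data=body,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req, timeout=10) as resp:
        resp.read()

while True:
    sample = read_local_sensors()
    if sample["temperature_c"] > TEMP_ALARM_C:
        sample["alarm"] = "over-temperature"  # surfaced centrally, not just on site
    push_to_cloud(sample)
    time.sleep(POLL_INTERVAL_S)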
▲ The use of cloud and AI can provide more accurate insights.

In managing edge infrastructure, AI is expected to save human effort and enable efficient, data-driven management, but some preparation is required to use it properly. The first prerequisites are a secure, scalable, and robust cloud architecture together with a "data lake" holding large volumes of normalized data. Beyond that, Kevin Brown said, organizations need experts with a deep understanding of the system's behavior as well as access to machine-learning expertise. In deriving insights from data, it is important to be clear about which data will be analyzed to answer which questions, and to collect and refine the data accordingly: simply pouring data into an AI system does not yield results, and most of the work of data analysis is obtaining and refining data that can be analyzed at all. These are not easy things for customers to identify and handle precisely, and because the problems customers face are becoming more complex, Kevin Brown pointed out that the difficulty keeps increasing.

Schneider Electric introduced 'UPS Score' as a way to simplify the data analysis and insight side of UPS management. UPS Score algorithmically analyzes information from hundreds of UPSs installed across various infrastructures, providing a clear picture of what a given UPS is doing right now and allowing a series of steps to be taken before a failure occurs. The scoring criteria include the device's service life, battery life, temperature, phase balance, alarm data, and sensor data. This gives a more intuitive view of the current state and allows problems to be addressed proactively before they become serious.
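Schneider Electric did not disclose how UPS Score is actually computed. Purely as an illustration of condensing the listed factors into a single number, the sketch below folds service life, battery age, temperature, phase imbalance, and recent alarms into a weighted 0-100 health score; every weight, lifetime, and threshold here is an assumption, not the real algorithm.

# Illustrative only: a weighted 0-100 health score over the factors the article
# lists for UPS Score. Weights, lifetimes, and thresholds are assumptions.

def clamp01(x: float) -> float:
    return max(0.0, min(1.0, x))

def ups_health_score(service_years: float, battery_years: float,
                     temperature_c: float, phase_imbalance_pct: float,
                     alarms_last_30d: int) -> float:
    # Each sub-score is 1.0 when healthy and decays toward 0.0.
    age_score     = clamp01(1.0 - service_years / 12.0)   # assumed 12-year design life
    battery_score = clamp01(1.0 - battery_years / 5.0)    # assumed 5-year battery life
    temp_score    = clamp01(1.0 - max(0.0, temperature_c - 25.0) / 15.0)
    phase_score   = clamp01(1.0 - phase_imbalance_pct / 20.0)
    alarm_score   = clamp01(1.0 - alarms_last_30d / 10.0)

    weights = {"age": 0.2, "battery": 0.3, "temp": 0.2, "phase": 0.15, "alarm": 0.15}
    score = (weights["age"] * age_score + weights["battery"] * battery_score +
             weights["temp"] * temp_score + weights["phase"] * phase_score +
             weights["alarm"] * alarm_score)
    return round(100.0 * score, 1)

# Example: a mid-life UPS with an ageing battery and a couple of recent alarms.
print(ups_health_score(6.0, 3.5, 29.0, 4.0, 2))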

Suspension of Two Sanctions Against Samsung BioLogics Confirmed, Accelerating Its Global Business

The suspension of both the first and second sanctions imposed on Samsung BioLogics by the Securities and Futures Commission (SFC) has been confirmed. The Supreme Court's third division announced on November 11 that it had dismissed 'without hearing' the SFC's re-appeal seeking to overturn the lower court's decision to suspend the sanctions against Samsung BioLogics. A 'without hearing' dismissal means the Supreme Court judges that the case does not warrant appeal or re-appeal and decides without holding a separate hearing. In 2017, the SFC announced that Samsung BioLogics had deliberately committed accounting fraud when it changed the accounting treatment of Samsung Bioepis from a subsidiary to an affiliated company at the end of 2015. Following the announcement, in July 2018 Samsung BioLogics received the first sanction, which included the dismissal of representatives and executives and auditor designation for three years. In November, the company faced the second sanction: dismissal of a representative, restatement of its financial statements, and a fine of 8 billion KRW. As the sanctions mounted, Samsung BioLogics requested a suspension of execution of both sanctions, and the lower courts sided with the company. The SFC re-appealed to the Supreme Court, but after the suspension of the second sanction was upheld on September 6, the suspension of the first sanction has now been concluded as well. With the suspension of execution confirmed, Samsung BioLogics can clear this bottleneck and accelerate its global business; on the Korean stock market, its shares jumped 3.99 percent on the 16th.

Samsung SDS Cloud Is Internationally Recognized as a Global TOP 10 Provider

▲ Samsung SDS Chuncheon Data Center (Photo courtesy of Samsung SDS)

Samsung SDS (CEO Hong Won-pyo) held Cloud Media Day at its Chuncheon Data Center on September 20 and introduced the cloud platform and technology with which customers can easily migrate their IT infrastructure and business systems to the cloud and operate them there. Samsung SDS entered the external cloud business in earnest last year on the strength of its experience migrating and operating the cloud for its affiliates, and currently operates 210,000 units of equipment, including virtual servers, storage, and networks. In recognition of this competence, Samsung SDS became the only Korean company named among the global top 10 providers of IT infrastructure operation services* selected by Gartner. * Source: Gartner, Market Share Analysis: Infrastructure Managed Services, Worldwide, 2018, Colleen Graham et al., 22 August 2019.

▲ CEO Hong Won-pyo of Samsung SDS (Photo courtesy of Samsung SDS)

CEO Hong Won-pyo of Samsung SDS stated, “The cloud business is entering its second phase after passing through the first. The first step is transforming the IT infrastructure to the cloud, while the second is integrating core solutions and services into the cloud. Many companies now appear to be at the second stage. In 2018, sales amounted to KRW 10 trillion, with external business accounting for 14%. Samsung SDS is aiming to grow sales in 2019 and to raise the share of external business to 19%.” Yoon Shim, Samsung SDS Vice President and Cloud Business Manager, emphasized, “We will contribute to enhancing our customers' business competitiveness by providing services optimized for the cloud along with cloud IT infrastructure.” Samsung SDS started its cloud business in 2010 and, as of 2019, operates five domestic data centers, ten overseas locations, and 210,000 units of equipment. It has more than 200 operating cases of its platform-based cloud application system (PaaS), and also runs financial and public clouds for Korea. To build cloud infrastructure, the company has written about 10,000 lines of code that it uses to complete and automate development and configuration.

▲ Yoon Shim, Vice President of Samsung SDS (Photo courtesy of Samsung SDS)

Recently, enterprise customers want to go beyond converting their IT infrastructure to the cloud and run their core business systems and business platforms there. To this end, Samsung SDS proposed three solutions: efficient use of various clouds, an easy and convenient development environment, and rapid roll-out of global services. First, Samsung SDS introduced the Samsung SDS Hybrid Cloud Platform, which manages private and public clouds in one place, easily supports data movement between clouds, and manages faults through server resource monitoring. Second, it introduced Samsung SDS PaaS (Platform as a Service), which applies cloud-native technology so that corporate customers can quickly and easily develop and operate business systems and modify and deploy applications in the cloud environment. Samsung SDS PaaS combines the representative cloud-native technologies of 1) containers, 2) DevOps, which couples development and operation, and 3) modular development (MSA: Micro Service Architecture), which changes and deploys only the modules that need changing; with it, customers can shorten the time to set up a development environment from eight days to one day and application deployment from two weeks to one day.
Finally, corporate customers who want to roll out services rapidly in the global market can reduce the time for infrastructure construction and application installation and deployment from 11 weeks to 3 weeks by applying Samsung SDS's Site Reliability Engineering (SRE) method.

▲ Samsung SDS Chuncheon Data Center Server Room (Photo courtesy of Samsung SDS)

Meanwhile, Samsung SDS introduced the recently opened Chuncheon Data Center, built as a Software Defined Data Center (SDDC). By applying SDDC technology, the company said it can rapidly expand resources by integrating and operating the server resources of its Chuncheon, Sangam, and Suwon data centers. In addition, the Chuncheon Data Center is equipped with eco-friendly, cutting-edge facilities: it actively uses renewable energy such as solar power to maximize energy efficiency and improves power efficiency by using natural wind for cooling. Its Power Usage Effectiveness (PUE) stands at 1.2, and the Dongtan data center dedicated to High Performance Computing (HPC), being built on the know-how gained here, is designed for a PUE of 1.1.
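For reference, Power Usage Effectiveness is the ratio of total facility power to the power delivered to IT equipment, so a PUE of 1.2 means roughly 20% overhead for cooling, power distribution, and other facility loads. The power figures in the short sketch below are illustrative, not Samsung SDS measurements.

# PUE = total facility power / IT equipment power.
it_load_kw = 1000.0           # power consumed by servers, storage, network (illustrative)
facility_overhead_kw = 200.0  # cooling, power distribution, lighting, etc. (illustrative)

pue = (it_load_kw + facility_overhead_kw) / it_load_kw
print(f"PUE = {pue:.2f}")  # 1.20 -> about 20% of delivered power goes to facility overhead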