
Opening Ceremony of '2019 Chuncheon Mak-guksu&Dak-galbi Festival'

The '2019 Chuncheon Mak-guksu (Buckwheat Noodle) & Dak-galbi (Spicy Stir-fried Chicken) Festival', hosted by Chuncheon City, began on June 11th. Major officials, including Chuncheon city government officials, city council members, and National Assembly members, attended to officially declare the festival open at 7 PM. At the opening ceremony, Hong-ja, Ji-won, and Suk-haeng from the popular competition program 'Miss Trot' performed, along with the program's winner, Song Ga-in. During the festival, a variety of programs are held, including orchestra performances, fireworks shows, and a fan signing event and concerts by the actor Jung Jun-ho, who serves as a public ambassador for Chuncheon City. Nighttime performances will be closely monitored to reduce inconvenience to residents around the festival grounds.
▲ The opening ceremony of the '2019 Chuncheon Mak-guksu & Dak-galbi Festival', which was prepared under an organizing committee system and is emerging as Chuncheon's flagship festival, was held at 7 PM on June 11th.
▲ Hite Jinro participated as a main sponsor, and the sponsors were announced at the beginning of the event.
▲ Officials including the mayor, the city council president, National Assembly members, and representatives of the organizing committee attended to celebrate the successful opening.
▲ With the opening ceremony, the six-day festival began in earnest.

Computex 2019 International Press Conference & AMD’s Lisa Su Keynote

COMPUTEX 2019, co-hosted by the Taiwan External Trade Development Council (TAITRA) and the Taipei Computer Association (TCA), will be held at the Taipei Nangang Exhibition Center (TaiNEX), the Taipei International Convention Center (TICC), and the Taipei World Trade Center (TWTC) from May 28 to June 1. It will present the latest technology trends around five key themes: Artificial Intelligence (AI) and the Internet of Things (IoT), 5G, Blockchain, Innovation and Startups, and Gaming and Extended Reality (XR). A variety of events are prepared, including exhibitions such as SmarTEX and InnoVEX and forums such as the Computex Forum and the 5G Summit. On the 27th, the day before the opening of Computex 2019, the 'International Press Conference & CEO Keynote' was held. At the CEO Keynote, newly added this year, AMD President and CEO Dr. Lisa Su introduced new products: the 2nd generation AMD EPYC datacenter processor 'Rome', the RX 5700 series as the first lineup of Radeon RX 5000-series Navi graphics cards, and the 3rd generation AMD Ryzen desktop processor family, including the 12-core Ryzen 9 3900X. At the International Press Conference, Walter Yeh, President and CEO of TAITRA, introduced COMPUTEX 2019. "Computex aims to create a synergy of global and local resources to provide a comprehensive platform for technical exchange and collaboration, innovative thinking, and resource integration. This year, in addition to expanding the scope of InnoVEX, TAITRA will contribute to innovation by forging a stronger connection between Taiwan's industry and global resources. With its global partners in the ICT industry, Computex will transform the world," said Walter Yeh.
▲ The Computex 2019 International Press Conference and CEO Keynote was held.
▲ Walter Yeh, President and CEO of TAITRA
▲ Computex 2019 has grown about 10% from last year, and it presents an innovative standard of ICT every year.
Walter Yeh first introduced the events and exhibitions of Computex 2019.
A total of 1,685 exhibitors showcase their technologies in 5,508 booths, nearly 10% growth from last year. On the opening day, Intel Senior Vice President and General Manager of the Client Computing Group Gregory M. Bryant will discuss how the company is transforming intelligent computing for a data-centric world. On the second day, the Microsoft Keynote Forum will offer a look into new intelligent cloud and intelligent edge advancements from Microsoft Corporate Vice President Nick Parker. In addition, the much-anticipated Computex Forum and InnoVEX Forum will focus on "Pervasive Intelligence" and "Connecting Global Startups with Taiwan's Advantage", respectively. The Taipei 5G Summit on the 30th will give an in-depth perspective on the latest technology and industry trends, and to help its international partners expand business opportunities, it will lead discussions on 5G market development opportunities, technology, and innovative applications. On the exhibition side, a Cyber Security & Video Surveillance area has been newly created to display the latest computer vision technology and market opportunities. InnoVEX, with 20% more exhibitors this year, has been relocated to TWTC Hall 1 to provide more room for displays, pitches, and matchmaking. Also, for the first time, a charity esports event, the ZOTAC CUP Fight for Charity LOL Tournament, will be held at Computex.
▲ AMD President and CEO Dr. Lisa Su
▲ More than twice the performance of the competitor was shown at the first public demo of the 2nd generation EPYC processor 'Rome'.
Next, at the CEO Keynote, Dr. Lisa Su presented and demonstrated a variety of 7nm computing and graphics products. Through these new products, AMD aims to deliver new levels of performance, features, and experiences to PC gamers, enthusiasts, and content creators. The first product is the 2nd generation AMD EPYC datacenter processor 'Rome'.
Since the EPYC processor launched in 2017, more than 60 EPYC platforms have been produced, thanks to its datacenter-oriented features and cost-effectiveness. AMD also emphasized that more than 50 EPYC processor-based cloud instances have been created, building a large cloud footprint in the datacenter business. Dr. Lisa Su introduced 'Frontier', an exascale supercomputer featuring AMD EPYC CPUs and Radeon Instinct GPUs, which is expected to be the world's fastest supercomputer with speeds of up to 1.5 exaFLOPS. At the event, a head-to-head public demonstration of the 2nd generation AMD EPYC server platform was conducted, and the 2nd generation AMD EPYC processor-based prototype server performed more than twice as well as the competitor. It is expected to deliver more than twice the performance per socket and four times the floating-point performance per socket compared to the previous generation, and it will be available in the third quarter of this year.
▲ The RDNA gaming architecture is based on a new compute unit design to increase efficiency and provide better gaming performance, lower latency, and lower power.
▲ Detailed information on the Radeon RX 5700 will be revealed in AMD's E3 live stream on June 10 at 3 PM PDT.
AMD also unveiled RDNA, a new gaming architecture for future PC gaming, consoles, and the cloud. Based on a new compute unit design, RDNA delivers improved performance, power, and memory efficiency compared to the previous Graphics Core Next (GCN) generation. It provides 1.25 times better performance per clock than GCN and up to 1.5 times better performance per watt, giving gamers better gaming performance at lower latency and lower power. The 7nm RDNA architecture is the basis of the 'Navi' chips, and AMD's new Radeon RX 5000 GPUs feature 'Navi'. At the keynote, Dr. Lisa Su introduced the RX 5700 series, the first lineup of the RX 5000 family.
Using the game 'Strange Brigade', a demonstration of the Radeon RX 5700 series GPU, which supports high-speed GDDR6 memory and a PCIe 4.0 interface, showed much higher performance than the competing product. The AMD Radeon RX 5700 series graphics cards will be available in July 2019, and detailed information will be revealed in AMD's E3 live stream on June 10 at 3 PM PDT. Dr. Lisa Su also discussed the AMD high-performance computing and graphics ecosystem with industry leaders including Roanne Sones, GM of OS Platforms at Microsoft, Joe Hsieh, Global VP of Asus, and Jerry Kao, VP of Acer. She emphasized, "We have continued our strategic investment in the next generation core chiplets and are providing 7nm products to high-performance computing ecosystems with improved process technology. Also, AMD is excited to be working with industry partners to showcase the next generation of Ryzen desktops, EPYC server processors and Radeon RX gaming cards at Computex 2019."
▲ The 12-core, 24-thread 'Ryzen 9 3900X' processor offers higher performance at lower power than the competing product.
▲ Specifications and pricing of the 3rd generation AMD Ryzen processor family
Finally, Dr. Lisa Su unveiled the third-generation AMD Ryzen desktop processors, which deliver breakthrough performance for gaming, productivity, and content creation applications. Based on the new 'Zen 2' core architecture and AMD's chiplet design, the third-generation AMD Ryzen desktop processors deliver about 15 percent higher IPC (instructions per cycle), improving performance across applications. A cache more than twice as large reduces memory latency for a smoother gaming experience, and floating-point performance has more than doubled. In addition, all third-generation Ryzen desktop processors are the first to support PCIe 4.0, bringing the latest motherboard, graphics, and storage technologies to PCs.
The third-generation AMD Ryzen desktop processor family consists of 6-core Ryzen 5 models (3600, 3600X), 8-core Ryzen 7 models (3700X, 3800X) and a 12-core Ryzen 9 model (3900X). The 8-core, 16-thread Ryzen 7 3700X operates at up to 4.4GHz with a thermal design power (TDP) of only 65W, while the 3800X operates at up to 4.5GHz and consumes 105W. The 3800X was introduced as delivering 34% better performance than the previous-generation 2700X. The flagship Ryzen 9 3900X, the first personal desktop processor with 12 cores and 24 threads, operates at up to 4.6GHz and, like the 3800X, consumes 105W. Dr. Lisa Su stressed that the 3900X is 14% faster in single-core and 6% faster in multi-core performance than the competing product while consuming 60W less power. The Ryzen 9 3900X is priced at $499, the Ryzen 7 3800X at $399, the 3700X at $329, the Ryzen 5 3600X at $249 and the 3600 at $199. The official launch is July 7, 2019.
▲ After the presentation, Dr. Lisa Su and officials from AMD's partner companies gathered for a commemorative photo.

Technical Overview of the 2nd Gen Xeon Scalable Processors

Intel hosted the 'Intel Data-Centric Press Workshop' at its Jones Farm Campus in Hillsboro, Oregon, US on March 5 and 6, and introduced its portfolio of solutions for the data-centric era. At the workshop, the technical characteristics of next-generation processors and platforms, including the next-generation Xeon Scalable Processors and Optane DC Persistent Memory, were introduced. Changes in IT technology, from IoT and cloud to 5G and artificial intelligence, are creating explosive data growth, and the ability to handle this data is becoming a matter of competitiveness. In addition, the transition to cloud computing, the increasing use of AI and analytics, and the cloudification of the network and edge are driving demand for changes in IT infrastructure. Intel expects the data-centric era to be its largest market opportunity ever, with a total size of $200 billion, and introduced plans to offer software- and system-level optimized solutions that can process everything, store more, and move faster to prepare for this market. As a new portfolio for the data-centric era, Intel introduced the second-generation Xeon Scalable Processors, the new Xeon D-1600 processor, Agilex FPGAs, Optane DC Persistent Memory, Optane DC SSDs, QLC 3D NAND-based DC series SSDs, and the 800 series Ethernet adapter. The new Xeon Scalable Processors, Optane DC Persistent Memory, Optane DC SSDs, and Ethernet technologies are expected to provide superior performance and efficiency across a variety of workloads through tight system-level coupling and software-level optimization. These innovations will also be available faster through 'Intel Select Solutions' with proven, optimized configurations.
▲ The second-generation Xeon Scalable Processors, known by the codename 'Cascade Lake'
▲ Ian Steiner, lead architect of 'Cascade Lake'
▲ Major technical features of the 2nd-Gen Xeon Scalable Processors
Ian Steiner, lead architect of the second-generation Xeon Scalable Processors, introduced the processors, also known by the codename 'Cascade Lake'. He first compared the present situation to that of seven years ago, when the Sandy Bridge-based Xeon E5-2600 series was introduced. At that time, cloudification was in its early stages, but now the cloud is active in every area. While power consumption was the main concern seven years ago, every component is now counted as 'cost'. In addition, the fields requiring heavy computing power have expanded to HPC, analytics, AI and beyond, and the use of workload-specialized custom processors has increased. The second-generation Xeon Scalable Processors provide improved performance, scalability, and efficiency, building on the features and platforms of the existing Skylake architecture. As for memory, support for 16Gb DDR4 doubles the supported capacity, and the memory controller speed has increased to DDR4-2933. AI inference performance has been greatly improved through AVX-512 VNNI and DL Boost technology, and hardware-level mitigations against vulnerabilities such as Meltdown and Spectre have been applied. Moreover, although it still uses a 14nm process, improvements deliver higher operating speed and power efficiency. The second-generation Xeon Scalable Processors offer up to 28 cores in the 8200 series and up to 56 cores in the 9200 series. Features including the cache configuration, up to three 10.4GT/s UPI connections for die-to-die connectivity, and up to 48 PCIe lanes are maintained.
The maximum memory capacity has been increased with support for 16Gb DDR4, and the operating speed has increased with support for 6-channel DDR4-2933. With Optane DC Persistent Memory, up to 4.5TB of memory can be configured per processor. On top of that, with DL Boost via AVX-512, vector operations can handle 16 DP, 32 SP, or 128 INT8 MACs in a single cycle. First introduced in the second-generation Xeon Scalable Processor family, the Xeon Platinum 9200 series packages two processor dies in one package, linked via UPI. Supporting up to two-processor configurations, the Xeon Platinum 9200 series in a dual-processor configuration is logically identical to an existing four-socket system, but can be configured for higher compute density in smaller form factors with lower latency. The memory controllers provide up to 281 GB/s of bandwidth in a 12-channel configuration per processor, utilizing both dies. The Xeon Platinum 9200 series is supplied as a BGA soldered onto the motherboard and has a TDP of 250 to 400W.
▲ VNNI completes inference-related operations that previously took three cycles in a single cycle
▲ Software optimization and hardware support together yield significant inference performance improvements
Matrix multiplication, the dominant operation in deep learning, collects the values obtained by multiplying many rows and columns into single values. In traditional HPC or AI training workloads, floating-point operations were used here, and the wide range of possible values was a drawback for performance. Using INT8 instead of floating point for inference, on the other hand, offers a greatly reduced range of values to consider, higher power efficiency through cheaper multiplications, and reduced pressure on the cache and memory subsystems.
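The INT8 multiply-widen-accumulate pattern can be sketched in plain Python. This is an illustrative model of the semantics only, not the actual AVX-512 instructions; the function names are invented for the sketch:

```python
# Illustrative model of INT8 inference arithmetic (not real SIMD code):
# a dot-product step multiplies 8-bit values, widens the products to 32 bits,
# and accumulates them into an INT32 total.

def fused_dot_step(acc, a_bytes, b_bytes):
    """VNNI-style: multiply, widen and accumulate four INT8 pairs in one step."""
    return acc + sum(int(a) * int(b) for a, b in zip(a_bytes, b_bytes))

def legacy_dot_step(acc, a_bytes, b_bytes):
    """Pre-VNNI: the same work as three separate stages."""
    products = [a * b for a, b in zip(a_bytes, b_bytes)]  # stage 1: multiply
    widened = [int(p) for p in products]                  # stage 2: up-convert to 32-bit
    for p in widened:                                     # stage 3: accumulate
        acc += p
    return acc

a, b = [1, 2, 3, 4], [10, 20, 30, 40]
assert fused_dot_step(0, a, b) == legacy_dot_step(0, a, b) == 300
```

Both paths compute the same INT32 result; the hardware advantage of VNNI is that it performs the fused version in fewer cycles than the three-stage sequence.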
When AVX-512 VNNI is used on the second-generation Xeon Scalable Processors, operations that take INT8 inputs and produce INT32 outputs can achieve four times the performance of AVX2. Previously, producing an INT32 result from INT8 inputs took three stages of multiplication, up-conversion, and accumulation, processing up to 128 MACs per core using two ports over three cycles. With VNNI, these three steps are processed in a single cycle with a single instruction, which in theory can triple the performance. With the MKL-DNN library, switching from AVX-512-based FP32 to INT8 improves performance by 1.33 times, and switching from AVX-512-based INT8 to VNNI-based INT8 improves it by 3 times. Intel showed that in an MKL-DNN micro-benchmark scenario, performance per watt can be greatly increased by utilizing VNNI: power consumption per socket remains similar to FP32, but the power consumed per unit of performance drops in proportion to the performance gain. In addition, when DL Boost is used, the processor's L2 cache miss rate is significantly lower than with FP32, and memory bandwidth usage also decreases.
▲ A memory bandwidth allocation function has been added to Intel Resource Director Technology.
▲ Speed Select technologies applied to N-series products, specialized mainly for network workloads
▲ Speed Select technologies applied to Y-series products, specialized for data centers
Optane DC Persistent Memory, officially supported from the second-generation Xeon Scalable Processors onward, can be used in two modes: 'Memory Mode', which uses DRAM as a cache to expand the total memory capacity, and 'App Direct Mode', a workload-optimized form that allows applications to directly access DRAM and Optane DC Persistent Memory according to their purpose.
It is compatible with the DDR4 interface, and modules from 128GB to 512GB will be introduced. Intel emphasized that in developing Optane DC Persistent Memory, the processors and modules were developed together from the beginning. Intel Resource Director Technology (RDT) has also gained a new capability. RDT makes it possible to partition processor resources so that jobs do not affect each other's performance, and by prioritizing jobs, system utilization can be maximized while maintaining SLA compliance. RDT allows monitoring and control of the L3 cache and memory bandwidth, and in the second-generation Xeon Scalable Processors a Memory Bandwidth Allocation technology has been added to allocate or limit memory bandwidth for specific tasks, minimizing their performance impact on the entire system and ensuring SLA compliance. Intel Speed Select Technology (SST) for workload-optimized environments is made up of three specific technologies, and which ones apply depends on the product family. Among them, SST-CP maintains a higher operating speed for priority tasks while slowing the processor down for lower-priority tasks, and SST-BF (Base Frequency) sets certain cores to a higher operating speed and assigns specific workloads to them. With these technologies, total power consumption can be kept at a constant level while providing an optimal environment for workloads both sensitive and insensitive to operating speed. SST-PP allows flexibility in processor selection and server operation: in a single product, up to three profiles can separately set the maximum temperature, TDP, operating speed, and the number of active cores.
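A rough sketch of the SST-PP idea in Python follows. The profile names and all numeric values here are hypothetical illustrations, not actual Intel specifications; the point is only the trade-off of active cores against operating speed within one TDP envelope:

```python
# Hypothetical SST-PP-style profiles for one processor SKU.
# All values are invented for illustration, not Intel specifications:
# each profile trades active cores against base frequency at the same TDP.
PROFILES = {
    "high_frequency": {"active_cores": 16, "base_ghz": 3.0, "tdp_w": 165},
    "balanced":       {"active_cores": 20, "base_ghz": 2.6, "tdp_w": 165},
    "high_density":   {"active_cores": 24, "base_ghz": 2.2, "tdp_w": 165},
}

def select_profile(workload_threads, frequency_sensitive):
    """Pick a profile at provisioning time, before booting the server."""
    if frequency_sensitive:
        return "high_frequency"
    return "high_density" if workload_threads > 20 else "balanced"

# A heavily threaded, throughput-oriented workload gets the many-core profile.
print(select_profile(32, frequency_sensitive=False))  # → high_density
```

In practice this selection would be made by the provisioning system (such as Ironic, mentioned in the article) rather than application code, but the trade-off it encodes is the same.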
This allows choosing, according to the situation, among settings such as fewer active cores at a higher operating speed, or a lower operating speed with the maximum number of active cores. As an example of its use, Intel showed that a server can be booted and a workload provisioned by selecting an SST-PP profile in Ironic, the OpenStack bare-metal provisioning system. The benefit of this technology is added flexibility in infrastructure that handles workloads with differing and changing characteristics.
▲ The dual-processor configuration of Xeon Platinum 9200 series processors is logically consistent with the existing 4-socket configuration.
Kartik Ananth, Senior Principal Engineer of the Intel Data Center Group, introduced the Xeon Platinum 9200 series processors and platforms. One of the most significant features of this processor is its excellent performance per socket, achieved by combining two second-generation Xeon Scalable Processor dies into a single processor and socket. The two-die configuration achieves twice the memory bandwidth per processor, yet each die is accessed with single-hop latency. So where the 'density' of computing power matters, equal capacity can be achieved in less area than the existing 4-socket configuration. The Xeon Platinum 9200 processor consists of two dies connected via UPI within a single processor. It supports up to two-processor configurations, which are logically identical to the existing four-socket configuration, with three UPIs per die connecting directly to the other dies. Each die has a 6-channel DDR4 memory controller, for 12 channels of DDR4 per processor. The processor package is a BGA with 5,903 contacts at 0.99mm pitch, supplied at the system level with the motherboard.
The Intel Server System S9200WK, featuring dual-processor configurations of the Xeon Platinum 9200 series, offers up to 80 PCIe 3.0 lanes. The Xeon Platinum 9200 series processors are available in 32-, 48- and 56-core configurations, and all feature 12-channel DDR4 memory controllers, delivering outstanding performance on memory-intensive workloads. Intel's test results show up to 407GB/s STREAM-TRIAD performance in a dual-processor configuration. Memory bandwidth per core works out to 3.6GB/s on a 56-core processor and 6.2GB/s on a 32-core processor, a favorable environment for bandwidth-sensitive applications such as HPC. Furthermore, the entire TDP can be dissipated with a single heat sink across all product families.
▲ Main features of the Intel Server System S9200WK for Xeon Platinum 9200
▲ Xeon Platinum 9200 series processors are supplied as a system-level configuration.
The Xeon Platinum 9200 series processors come with the Intel Server System S9200WK. The S9200WK is a 2U rack form factor with up to four independent compute nodes depending on the node configuration, and each node supports warm-swap. Memory is available in 12-channel configurations with 12 DIMMs per processor, and in a 2U compute module, storage can use two hot-swap U.2 NVMe SSDs per module. The power supply uses three hot-swap 2100W or 1600W units in the chassis, with both air and liquid cooling options. The compute modules include a 1U half-width liquid-cooled compute sled, a 2U half-width liquid-cooled service sled, and a 2U half-width air-cooled compute/service sled. Hot-swap storage is only available in 2U compute modules, and for NVMe there are two M.2 slots per node in 1U, and two M.2 plus two U.2 in 2U. PCIe expansion allows two low-profile PCIe cards per node in 1U and four per node in 2U.
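The bandwidth figures quoted above can be cross-checked with back-of-the-envelope arithmetic, assuming DDR4-2933 transfers 8 bytes per channel per transfer:

```python
# Peak bandwidth: 12 channels of DDR4-2933 per Xeon Platinum 9200 processor,
# at 8 bytes (64 bits) per transfer per channel.
peak_per_cpu = 2933e6 * 8 * 12 / 1e9   # ~281.6 GB/s, matching the ~281 GB/s figure

# Measured STREAM-TRIAD on a dual-processor system, divided across all cores:
measured_gb_s = 407.0
per_core_56 = measured_gb_s / (2 * 56)  # ~3.6 GB/s per core on 56-core parts
per_core_32 = measured_gb_s / (2 * 32)  # ~6.4 GB/s per core on 32-core parts
print(round(peak_per_cpu, 1), round(per_core_56, 1), round(per_core_32, 1))
```

This simple division reproduces the quoted 3.6GB/s per core exactly; the 32-core figure comes out slightly above the quoted 6.2GB/s, suggesting the per-core numbers were measured per SKU rather than derived by dividing the single 407GB/s result.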
The Intel Server Chassis FC2000 is Intel's disaggregated server configuration, offering power and cooling in a shared form, with three 1600W or 2100W units for high availability and a choice of air or liquid cooling. In terms of architecture-wide software optimization, the Xeon Platinum 9200 processors carry additional multichip packaging information in CPUID. A two-die Xeon Platinum 9200 processor might otherwise be recognized as two processors, but this information makes it possible to logically recognize and operate it as a single physical package. In addition, the benefits of the second-generation Xeon Scalable Processors, such as DL Boost, AVX-512 support, and the various software optimizations for AI, apply equally to the Xeon Platinum 9200 processors. On top of that, the AI inference performance of the Xeon Platinum 8280 processor is 14 times higher than that of the early first-generation Xeon Scalable Processors, and the Xeon Platinum 9282 achieves a 30-fold improvement.