Wi-Fi Urgently Needs More Spectrum
QNAP Launches High-speed Dual Band Wi-Fi 6 PCIe card for NAS and PC
Primestream Joins SRT Alliance to Work Toward High-Quality, Low-Latency Internet Video Streaming for Remote Workflows
UKWISPA showcases Terragraph from Facebook as first three global vendors prepare Fixed Wireless Access for Gigabit Britain
Yamaha UC Helps Connect Assisted Living Facility Residents to Loved Ones
QNAP Launches High-performance quad-core 1.7 GHz NAS for Reliable Home and Personal Cloud Storage
Harmonic Providing Free Access to Video Streaming Innovation During Global Lockdown
Infinet Wireless Delivers High Definition CCTV Network for Busy British Town Center
ProLabs to advance 400G network infrastructure with launch of high-density optical transceiver at OFC 2020
NGMN Alliance and ESOA Members Collaborate to Extend Rural Connectivity With Non-terrestrial Networks
Neural Technologies, “Rating reconciliation is critical to maximize 5G services and avoid revenue leakages”
To manage the growing number of 5G end user services, Communications Service Providers (CSPs) must consider ‘real-time’ digital rating and charging processes to fully capitalize on new 5G revenue streams and avoid revenue leakage, said Paul Cox, Business Development Manager at Neural Technologies, at the RAG Delhi Conference, 5–6 February 2020. With the emergence of 5G, customers are demanding increasingly personalized end user services, creating many new revenue streams for CSPs. Speaking at the conference, Cox explained how to manage growing volumes of complex end user services through effective charging and billing processes. “To truly capitalize on the many opportunities of 5G use cases, CSPs must adopt the latest digital transformation technologies to create a ‘real-time’ experience of service delivery and payment that fosters long-term customer loyalty,” said Cox. “To achieve this, CSPs need a near real-time convergent rating and charging strategy that can match service demand using the latest Artificial Intelligence (AI) and Machine Learning (ML) technology.” Detecting revenue leakage is key to maximizing the potential of new service revenue streams. Given the sheer volume of customer transactions, Neural Technologies’ Rating Reconciliation solution supports the processing of tens of thousands of data records per second, which is critical to charging customers accurately for the services they opt into. The solution supports legacy and next-generation environments, any type of device, and any type of service and payment method. The Optimus Charging solution, based on the Optimus Platform, provides a fully configurable solution covering all charging processes, including configurable input formats, convergent mediation, and complex tariff plans, to charge customers accurately for their services. This is critical to support growing end user usage and to execute efficient, accurate charges that foster customer loyalty. Cox addressed the challenges of revenue management with Optimus Revenue Assurance Rating Reconciliation during his presentation at the RAG Delhi Conference 2020 on 5 February.
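The reconciliation idea described here — independently re-rating usage records and comparing the result against what was actually billed — can be sketched in a few lines. This is an illustrative toy, not Neural Technologies' Optimus product; the record fields, tariff table, and tolerance are hypothetical.

```python
# Toy sketch of rating reconciliation: re-rate each usage record from the
# tariff table and flag records whose billed amount disagrees (leakage).

def rate_usage(record, tariff):
    """Re-rate a usage record from first principles using the tariff table."""
    return round(record["units"] * tariff[record["service"]], 2)

def reconcile(usage_records, charged_amounts, tariff):
    """Return (record id, expected, billed) for every mismatched record."""
    leaks = []
    for record in usage_records:
        expected = rate_usage(record, tariff)
        billed = charged_amounts.get(record["id"], 0.0)
        if abs(expected - billed) > 0.005:  # tolerance for rounding
            leaks.append((record["id"], expected, billed))
    return leaks

tariff = {"data_mb": 0.01, "voice_min": 0.05}     # hypothetical rates
usage = [
    {"id": "r1", "service": "data_mb", "units": 500},    # expect 5.00
    {"id": "r2", "service": "voice_min", "units": 120},  # expect 6.00
]
charged = {"r1": 5.00, "r2": 4.80}  # r2 was under-billed: leakage

print(reconcile(usage, charged, tariff))  # → [('r2', 6.0, 4.8)]
```

A production system would stream millions of records through this comparison rather than holding them in lists, but the core check is the same.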
NetFoundry’s Cloud Gateway Now Available at Digital Ocean Marketplace
CHARLOTTE, NC – September 5, 2019 – Solution providers can now spin up programmable, zero trust networking across the Internet via NetFoundry’s 1-click application on the DigitalOcean Marketplace. DigitalOcean’s Marketplace is a platform that connects developers with easy-to-use, partner-built solutions to simplify and accelerate app development, deployment, and scaling. Independent Software Vendors (ISVs), SaaS providers, Solution Integrators (SIs) and Managed Service Providers (MSPs) who host their apps in DigitalOcean use NetFoundry to enable private network, zero trust connections between their apps and their customers, without requiring their customers to configure and manage VPNs, which are expensive to operate and often degrade application performance. Rather than nailing up 20 VPNs to 20 customers, the ISV uses NetFoundry’s APIs, SDKs or web console to centrally and programmatically manage access across all tenants in a least privileged access paradigm. “DigitalOcean and NetFoundry share a goal to enable developer innovation, and the combination of our services enables developers, ISVs, SIs and MSPs to enjoy a new art of the possible, in which they leverage private application connectivity without requiring their end customers to nail up VPNs, private circuits and custom hardware,” said Galeal Zino, CEO of NetFoundry. “As developers and small- and mid-sized businesses turn to modern apps to power their latest projects, we want to help make app creation easier from start to finish. By building upon DigitalOcean’s Developer Cloud to simplify infrastructure, NetFoundry lets developers instantly connect distributed applications securely in any cloud or device in just one click,” said Nick Wade, Head of DigitalOcean Ecosystem & Marketplace. NetFoundry’s global Fabric is accessible from any Internet connection via SDK or software endpoints, and functions as a zero trust Internet overlay with optimized performance.
NetFoundry manages the Fabric as a service (Network-as-a-Service), while developers control the Fabric via API, SDK or web console, often simply using NetFoundry’s APIs in their DevOps and cloud orchestration tools such as Jenkins and Ansible (Connectivity-as-Code). “NetFoundry offers a better way for developers to securely scale and deliver apps,” said Greg Shields, Director of Strategic Partner Alliances for NetFoundry. “While legacy networking is fine for legacy use cases, it was not built to effectively deploy applications across hybrid topologies. Trying to use legacy WAN for distributed apps blocks innovation and business benefits due to a variety of issues: long implementation timelines, difficulty in scaling, proprietary hardware, complex and error-prone architectures, compromised performance, increased attack surfaces, incompatibility with DevOps, and potentially higher than expected costs are all common.”
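The "Connectivity-as-Code" pattern — driving network access from a script instead of hand-configuring VPNs — can be sketched as building a least-privilege service definition for a management API. The endpoint URL and payload fields below are hypothetical illustrations, not NetFoundry's actual API schema.

```python
# Hedged sketch of Connectivity-as-Code: a DevOps script assembles a
# zero trust service definition that only named tenants may reach.
# The endpoint and field names are invented for illustration.
import json

def build_service_request(network_id, app_host, app_port, allowed_tenants):
    """Assemble a least-privilege service definition for a management API."""
    return {
        "url": f"https://api.example.com/networks/{network_id}/services",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "host": app_host,
            "port": app_port,
            # Zero trust: only the tenants listed here can reach the app.
            "allowedTenants": sorted(allowed_tenants),
        }),
    }

req = build_service_request("net-42", "app.internal", 8443, {"acme", "globex"})
print(req["url"])
```

In a CI pipeline (Jenkins, Ansible, etc.) a request like this would be POSTed whenever a new tenant is onboarded, replacing the per-customer VPN setup described above.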
InfiNet Wireless to showcase high performing mining solutions at Mining Week Kazakhstan
InfiNet Wireless, the global leader in fixed wireless broadband connectivity, will showcase its ground-breaking wireless solutions for the mining industry at Mining Week Kazakhstan 2019. InfiNet Wireless will present its Mobile Video Complex and Vector 5 solutions at the event, also known as the 15th International Mining Exhibition for Mining and Exploration, Mineral and Coal Processing and Metallurgical Technologies. Taking place in the city of Karaganda, Kazakhstan from 25-27 June, the event will unite mining experts with a focus on the latest solutions for advancing connectivity within mines. “We are looking forward to sharing our cutting-edge solutions with the mining industry at this leading event. The Mobile Video Complex solution offers mining companies invaluable benefits, including optimized maintenance costs and operation in harsh climatic conditions and aggressive environments,” said Roman Smirnov, Chief Commercial Officer at InfiNet Wireless. “The scalable and flexible system, which can be deployed easily and at low cost, has a communication channel range of up to 100km and a communication distance with fixed objects of up to 6km at a vehicle speed of 90km/h.” InfiNet Wireless’ Mobile Video Complex is a solution crafted specifically for the mining industry, organizing the main communication channels for mining companies. It also consolidates all geographically distributed assets of the enterprise into a single information network and operates a CCTV system. Furthermore, the Mobile Video Complex allows seamless communication with mobile objects and machinery including cars, excavators, drilling machines, railway transport, and loading and unloading equipment. The Mobile Video Complex is compatible with InfiNet Wireless’ InfiLINK 2x2 PRO, InfiLINK 2x2 and InfiLINK XG wireless point-to-point solutions and its InfiMAN 2x2 point-to-multipoint solution.
The solution has already been successfully deployed by one of the largest thermal coal producers in Kazakhstan, one of the top five mining enterprises in Russia and one of the largest gold mining companies worldwide. InfiNet Wireless will also display its Vector 5 point-to-point solution, which operates at 5 GHz and boasts a capacity of up to 450 Mbps in only a 40 MHz channel, full Quality of Service (QoS) support, and processing power in excess of 800,000 packets per second. Vector 5 is specifically designed to offer the highest spectral efficiency available on the current wireless market and is capable of operating in temperatures from below zero to tropical conditions. Vector 5 offers a wide range of uses at a high performance level, such as Internet access, multiservice networks, telemetry and high-resolution video transmission in a CCTV infrastructure, making efficient use of the spectrum currently available. InfiNet Wireless’ booth will be located at stand number 339 at the Multilogic Sport Complex Zhastar.
Launch of the digital service EVE: automated live captions with artificial intelligence
Filmgsindl invented EVE after experiencing the difficulties of live captioning at customer events. The objective was not only to reduce the outsized costs of travel and external expenses for interpreters, stenographers and hardware, but also to find a better digital solution, as the quality of human-produced live captions often shows limitations. EVE thus not only helps organizations and companies like Microsoft meet accessibility standards and lower costs; it is also an additional medium. The digital service captures every spoken word and shares a transcript (PDF version) directly after the speech for further use, such as articles, event film subtitles and SEO. That content makes events, speeches and video libraries completely searchable and can improve reputation and image through the digital footprint. Nowadays everybody posts every thought on Twitter and shares pictures on Instagram; now it is time to digitalize the spoken word. To guarantee the quality of the text output, it is possible to use one or more online correctors. These editors can improve the quality even further, as they can correct the text live, from anywhere. EVE learns constantly through machine learning: the basic language model is optimized continuously, and its results improve accordingly. EVE also memorizes the corrections, and individual dictionaries can be uploaded to teach EVE specialized vocabulary. Thomas Papadhimas: “It is 2019, and thus long overdue to offer a digital service which automatically generates live captions of videos, events, lectures, etc. The service is easy to use with common platforms and devices, independent of OS, and cost efficient. Globally, many people rely on captions, but subtitles are rare. EVE will change that and make the world a better place, as inclusion is not negotiable.” So far EVE works in English and German, but live machine translation into other languages is already available in a beta version.
A feature on the roadmap will further improve recognition based on what the system learns from the human correctors' edits. More details will be shared soon.
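The mechanism described above — EVE memorizing editors' corrections and uploaded vocabulary, then applying them to future output — can be illustrated with a minimal sketch. This is not EVE's actual implementation; the correction dictionary and sample transcript are invented for illustration.

```python
# Toy sketch: apply a learned correction dictionary to raw speech-recognition
# output, replacing whole-word misrecognitions with the corrected forms.
import re

def apply_corrections(transcript, corrections):
    """Replace whole words using a user-supplied correction dictionary."""
    def fix(match):
        word = match.group(0)
        # Look up the lowercase form; fall back to the original word.
        return corrections.get(word.lower(), word)
    return re.sub(r"[A-Za-z']+", fix, transcript)

# Hypothetical vocabulary learned from a human corrector's past edits.
corrections = {"filum": "Film", "gsindl": "Gsindl"}
print(apply_corrections("filum gsindl presents eve", corrections))
```

A real system would apply such corrections inside the language model rather than as post-processing, but the effect the article describes — recognition improving as corrections accumulate — is the same.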
KISO Released the Results of Its Survey on User Awareness of Naver’s Search Word Service
On February 19, the Naver Search Word Verification Committee of the Korea Internet Self-governance Organization (KISO) held a press conference to release the results of its survey on user awareness of Naver’s search word service at the Conference House Dalgaebi in Jung-gu, Seoul. The committee, which has been verifying the appropriateness and validity of Naver's search word service since 2012, has tried to apply consistent standards and principles in its verification process, but there have been cases in which value judgments clashed sharply within the committee or the limits of drawing realistic standards were reached. A user awareness survey therefore became necessary to gauge public sentiment, beyond expert opinion, and to find out what the public’s common-sense criteria are. The survey was conducted by the research company Macromill Embrain on 2,000 men and women aged 19 to 60. The survey questions covered a) usage and evaluation of the search word service and b) opinions on search word service policy. ▲ The press conference on the user awareness survey of Naver’s search word service was held. First, regarding usage, users use the search word service about '2 to 5 times' per day on average, with no difference according to age, sex, region or political tendency. However, heavy users who use it ‘more than 10 times’ a day were mostly in their 20s and 30s. In the case of real-time search, the results show that users look at the ranking of search words to check recent topics or issues, and then click on ‘1 to 3 search words (45.7%)’ to obtain relevant information. On the other hand, overall reliability was not as high as satisfaction with the service. By age group, satisfaction and reliability were highest among respondents in their 50s, while reliability was low among those in their 20s.
As for the evaluation of each search word service, the order 'Autocomplete Search > Related Search > Real-time Search' held across all evaluation items (degree of use, usefulness, convenience, necessity, intention to use). Next, the section on search word service policy was divided into 1) judgments about deletion of search words, 2) portal providers’ intervention in the search word service, and 3) opinions on portal providers’ search word service policy. 1) Judgments about deletion of search words Among search words exposed mainly as ads, opinions differed somewhat depending on whether the event was of a public service or commercial nature, but more than half answered 'Leave it as it is.' To protect children and adolescents, 87.1% of respondents answered that adult video titles should be deleted, while 13% said they should be left. In addition, for copyright protection, when the title or an episode of popular content appears in real-time search, respondents answered that it should be deleted (58%) because illegal download methods and the like appear in the search results. In the case of incidents like suicide and sexual violence that could become national concerns, 'Leave it as it is' drew a high percentage for exposing local names and school names as search words, with a 5% higher response rate for 'exposure of school names'. Moreover, about 80% of respondents said that the names of hospitals where medical accidents occurred, or of companies or products under investigation, should be exposed as search words. Many respondents felt that search words combining celebrities' names with specific body parts should be deleted because of defamation and insult. On the other hand, a high percentage felt that search words about celebrities' romance rumors, involvement in accidents, and crime-related cases should be left as they were.
An interesting point about celebrity search words is that 63% felt that search words about celebrity couples who separated long ago should be deleted. In this regard, the verification committee said, “The public's judgment on deleting search words is more reasonable than the experts expected.” In the case of real-time searches that became controversial after anonymous media reports, ‘Leave it as it is’ drew a high percentage for troubled religious figures, restaurants, and celebrities. On the other hand, for the names of celebrities who stirred controversy on entertainment shows, the difference between ‘It should be deleted (42.5%)’ and ‘Leave it as it is (57.5%)’ was not significant. In addition, for antisocial and illegal search words in which a specific person’s name is combined with words like ‘garbage, suicide, communist, etc.’, the answer ‘It should be deleted’ was remarkably high. Search words related to the Sewol Ferry were judged differently depending on the case: 78.8% of respondents answered that disgusting expressions about the Sewol Ferry should be deleted, but for the search word ‘7-hour theory of Sewol Ferry’, 61.4% answered ‘Leave it as it is’, and for ‘human sacrifice theory of Sewol Ferry’, 62.8% answered ‘It should be deleted’. 2) Portal providers’ intervention in the search word service According to the survey on whether portal providers should intervene in defamatory search words, the results differed by occupation. Only in the case of ordinary citizens (57.6%) did a majority answer ‘need intervention’; for the other categories, the ‘need intervention’ responses were: independent creators (39.6%) > celebrities (36.9%) > businesspeople (27.6%) > politicians (26.6%) > high-ranking officials (26.5%).
Meanwhile, in the section on intervention to delete real-time searches for the protection of users, intervention for 'youth protection (80.8%)' showed the highest percentage. As for portal providers’ intervention in deleting search words to protect privacy and personal information, the answers for high-ranking officials (34.1%), politicians (35.1%), and businesspeople (37.2%) showed a low percentage of ‘need intervention’, while the answers for celebrities (49.0%) and independent creators (52.2%) were split roughly evenly between ‘need intervention’ and ‘nonintervention’. On the other hand, support for intervening to protect ordinary citizens’ privacy was high, at 69.8%. 3) Opinions on portal providers’ search word service policy In the survey on portal providers' search word service policy, rated on a scale of one to five, the statement “the portal provider has a responsibility to manage the search word service” got the highest score, 3.99, while “the portal provider manages it fairly” got the lowest, 2.66. As with previous reliability surveys, this shows that users have low trust in portal providers' management of search words. In this regard, the verification committee said, “Because giving users the impression of intentional involvement is what portal providers fear most, they always soften it by saying ‘There is no manipulation’, meaning there is no bad intention in managing it. However, because users take this to mean that search words are not touched at all, reliability has been lowered by this gap in understanding between users and portals.
If a portal provider transparently presents its criteria for managing search words, it will help improve users' trust.” As the most appropriate method for fair management of the search word service, 'management according to portal providers' principles of self-responsibility (37.7%)' got the highest percentage. On discontinuing portal providers’ search word service, the answers were mostly negative: ‘It should be continued’ got 63.7%, and ‘It should be discontinued’ got 7.5%. Moreover, 79.0% of respondents answered that a portal provider should reveal its management principles for the search word service and its standards for search word deletion; ‘I don’t know’ got 10.4% and ‘No need to reveal’ got 10.4%. Lastly, when asked about the necessity of external institutional verification of Naver's search word service, 87.1% answered that 'external verification is necessary'. After releasing the survey results, the Naver Search Word Verification Committee presented policy proposals: “First, the main goal of search word service policy should be the 'protection of users and promotion of their interests'. Second, the process of gathering opinions from various interest groups on search word service policy needs to continue. Third, the principle of 'nonintervention' in search word service policy needs to be reconsidered. Fourth, it is urgent to improve the reliability of the search word service.”
NCsoft AI Media Talk
NCsoft held an AI media talk event at its R&D center in Seongnam, Gyeonggi-do on March 15th, 2018. The event was organized to introduce NCsoft's current status and vision for AI research and development. Officials including Un-hee Han, Chief of the Media Intelligence TF, Jae-jun Lee, Director of the AI Center, and Jung-seon Jang, Chief of the NLP Center, attended the event. “NCsoft started AI research in 2011 and organized today's event to introduce what it has done so far. I would like to present the AI research that NCsoft has been preparing for a long time,” said Un-hee Han, Chief of NCsoft’s Media Intelligence TF. ▲ NCsoft's AI media talk was held. According to the presentation, NCsoft's AI research currently centers on the AI Center (Artificial Intelligence Center) and the NLP Center (Natural Language Processing Center). The two centers, with a total of more than 100 researchers, operate five organizations under the direct control of CEO Taek-jin Kim. The AI Center comprises the Game AI Lab, Speech Lab, and Vision TF, while the NLP Center covers its technology areas through the Language AI Lab and Knowledge AI Lab. The Game AI Lab researches AI technologies for game development and services, such as game-playing AI, AI for game planning, and AI for game art development, based on reinforcement learning, deep learning and simulation technology. By applying AI to the ‘Infinite Tower’ content of ‘Blade & Soul’, it created an environment where users can duel with AI. Recently, through deep reinforcement learning, which combines existing reinforcement learning with deep learning, AI performance has improved, and a combat AI that feels human-like, trained on users' combat logs, is being developed.
The Speech Lab researches voice, speaker, and emotion recognition technology, which recognizes the language, speaker, and emotion information contained in a voice signal, and voice synthesis technology, which converts text into human-like voices such as natural dialogue and emotional speech. The lab is also studying how to use this technology in game development and play. The Vision TF researches images and video, such as AI that recognizes images or video, or creates images using generative adversarial network (GAN) technology. Examples include AI that automatically assigns tag information to graphic resources, performs coloring (sketch auto-coloring), and automatically generates needed images. ▲ The Game AI Lab applied AI to the ‘Infinite Tower’ content of 'Blade & Soul'. The Language AI Lab researches application technologies for exchanging information in human language, such as Q&A, dialogue, document summarization, and story creation, as well as core natural language processing technology. Beyond simply letting AI answer questions, this lab tries to make AI grasp the significant parts of a text and summarize it. The Knowledge AI Lab researches technologies that infer, generate and deliver new knowledge from meaningful knowledge extracted and stored from various data such as text. Meanwhile, NCsoft plans to expand and strengthen its investment in fostering AI researchers and in R&D. To this end, NCsoft is actively recruiting talented individuals. The AI Center and the NLP Center are working closely with 12 research laboratories in the domestic AI field, including those at Seoul National University and KAIST.
Recently, Hae-chang Lim, former professor in the Department of Computer Science at Korea University and Korea's top authority in the field of natural language processing, joined the NLP Center as an advisory professor. At the same time, the firm will continue to share the status of its research not only within the firm but also externally, including with the academic community. It held ‘NCSOFT AI DAY 2018’ on February 22nd and 23rd, sharing the current status of its research and development with about 200 NCsoft employees and 100 outside attendees, including professors from domestic graduate schools engaged in industrial cooperation and master's and doctoral students.
Google AI Forum 10th Round: AI Innovation and Computational Photography
On February 28, 2018, Google hosted the ‘Google AI Forum 10th Round: AI Innovation and Computational Photography’ at its office in Gangnam-gu, Seoul. At the conference, Marc Levoy, Distinguished Engineer at Google, gave a video lecture on how AI technology is integrated into the photographs that record our everyday lives and memories. ▲ ‘Google AI Forum 10th Round: AI Innovation and Computational Photography’ was held. According to the presentation, Google introduced a ‘portrait mode’ that combines machine learning and computational photography techniques with its new Pixel smartphone. Portrait mode automatically applies a soft out-of-focus effect to the background so that the person is highlighted. This draws the eye to the subject rather than a cluttered background and lets the photographer take more artistic pictures. Portrait mode improves photographs through four steps, each made more efficient through AI, to deliver better results to users. The first step is to create an HDR+ image at capture time. HDR+ is Google's computational photography technique for improving the quality of captured photos. To avoid blowing out highlights, HDR+ captures several under-exposed images, then aligns, averages and merges the frames to reduce noise in the shadows. As a method of reducing global contrast while preserving local contrast, it also amplifies these shadows, yielding pictures with high dynamic range, low noise, and sharp details even in dim lighting. The idea of aligning frames to reduce noise has been known for decades, but Google noted that its implementation is notable for working on photos from a handheld camera. ▲ HDR+ is Google's computational photography technique for improving the quality of captured photos. The second stage is machine learning-based foreground-background segmentation.
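The noise-reduction principle at the heart of the HDR+ merge described above — averaging several aligned, under-exposed frames shrinks random sensor noise by roughly the square root of the frame count — can be demonstrated with a toy simulation. This is a pedagogical sketch, not Google's pipeline; the signal level, noise level, and frame count are made up.

```python
# Toy demonstration: averaging N aligned noisy frames reduces random noise
# by roughly sqrt(N). We simulate a flat "image" with Gaussian sensor noise.
import math
import random

random.seed(0)
TRUE = 100.0      # true pixel value of the scene
NOISE = 10.0      # per-frame noise standard deviation
N_PIXELS = 1000   # pixels per simulated frame

def noisy_frame():
    """One capture: the true image plus independent Gaussian sensor noise."""
    return [TRUE + random.gauss(0, NOISE) for _ in range(N_PIXELS)]

def merge(frames):
    """Average aligned frames pixel-by-pixel, as in the HDR+ merge step."""
    return [sum(px) / len(frames) for px in zip(*frames)]

def rms_error(img):
    """Root-mean-square deviation from the true scene value."""
    return math.sqrt(sum((p - TRUE) ** 2 for p in img) / len(img))

single = rms_error(noisy_frame())
merged = rms_error(merge([noisy_frame() for _ in range(9)]))
print(f"RMS noise: single frame {single:.2f}, 9-frame merge {merged:.2f}")
```

With 9 frames the merged RMS noise lands near a third of the single-frame noise, which is why HDR+ can under-expose to protect highlights and still recover clean shadows. (Real HDR+ must also align the frames, since the camera is handheld.)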
Broadly, this stage decides which pixels belong to the foreground, typically a person, and which belong to the background. This is a tricky problem, because the background cannot be assumed to be a particular color such as green or blue, unlike chroma keying (a.k.a. green screening) in the movie industry. Instead, Google applied machine learning, training a Convolutional Neural Network (CNN) written in TensorFlow to estimate which pixels are part of a person and which are not. 'Convolution' means that the learned components of the network are organized as filters (weighted sums of neighboring pixels around each pixel), so you can think of the network as filtering the image, then filtering the filtered image, and so on. ‘Skip connections’ allow information to flow easily from the early stages of the network, where it reasons about color and edges, to later stages, where it reasons about high-level features (faces and body parts). Combining stages like this is important when you need not just to determine whether a photo contains a person, but to identify exactly which pixels belong to that person. The CNN was trained on almost a million pictures of people with hats, sunglasses, and ice cream cones. The third stage is calculation of depth using a stereo algorithm. The Pixel 2 doesn't have dual cameras, but it does have a technology called Phase-Detect Auto-Focus (PDAF) pixels, sometimes called dual-pixel autofocus (DPAF). This works by splitting every pixel on the image sensor chip into two smaller side-by-side pixels and reading them off the chip separately. While many cameras, including DSLRs, use PDAF to focus faster during video recording, the Pixel 2 uses it to compute depth maps.
PDAF pixels give views through the left and right sides of the lens in a single snapshot, and Google uses these left-side and right-side images (or top and bottom) as input to a stereo algorithm like the one used in Google’s Jump panorama stitcher. This algorithm first performs subpixel-accurate tile-based alignment to produce a low-resolution depth map, then interpolates it to high resolution using a bilateral solver. ▲ Depth is calculated using a stereo algorithm. Lastly, the fourth stage puts it all together to render the final image. This step combines the segmentation mask computed in the second step with the depth map computed in the third step to decide how much to blur each pixel of the HDR+ picture from the first step. The rough idea is that pixels considered to be the person stay sharp, while pixels considered background are blurred in proportion to how far they are from the in-focus plane, with these distances taken from the depth map. The blur is applied by replacing each pixel with a translucent disk of the appropriate size; compositing all these disks in depth order yields a good approximation of real optical blur. Levoy also presented tips for shooting a nice portrait. First, stand close enough to the subject that the head fills the frame, and for group shots, place the subjects at the same distance from the camera. For the blur effect, put a little distance between the subject and the background, and take off dark sunglasses, wide-brimmed hats, and big scarves. When taking close-ups, adjust the focus so that the subject of interest remains sharp. After the lecture, Levoy said, “It is true that mobile phones cannot yet completely replace professional cameras due to technical and mechanical limitations, but it is possible to show users a certain level of photo quality. This is important for widening users' choices, and machine learning and computational photography techniques are at the center of it.”
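The fourth rendering stage described above can be sketched in miniature: keep masked "person" pixels sharp and blur everything else in proportion to its distance from the in-focus plane. This toy operates on a single 1-D row of pixels with a simple box blur standing in for Google's translucent-disk compositing; the pixel values, mask, depths, and blur strength are all invented for illustration.

```python
# Toy sketch of the portrait-mode rendering step: per-pixel blur radius grows
# with distance from the focus plane, while pixels in the person mask stay
# sharp. A 1-D box blur stands in for the real disk compositing.

def render(row, mask, depth, focus_depth, strength=0.5):
    """Blur each non-person pixel by a radius proportional to its defocus."""
    out = []
    for i, value in enumerate(row):
        if mask[i]:                      # segmentation says "person": keep sharp
            out.append(value)
            continue
        radius = int(abs(depth[i] - focus_depth) * strength)
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))  # box average = crude blur
    return out

row   = [10, 200, 10, 10, 200, 10]   # hypothetical pixel intensities
mask  = [1, 1, 0, 0, 0, 0]           # first two pixels belong to the person
depth = [2, 2, 2, 6, 6, 6]           # metres; farther pixels blur more
print(render(row, mask, depth, focus_depth=2))
```

Note how the background pixel at the focus depth is left untouched (radius 0) while the distant pixels are smeared together, which is exactly the depth-proportional behavior the lecture describes.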
Shutterstock Held Press Conference for Strategy Announcement
Shutterstock held a press conference at the InterContinental Seoul COEX, Gangnam-gu, Seoul, on the morning of December 8th, 2017. The event, organized by Shutterstock, which was participating in the Seoul Design Festival, introduced the company's strategy and was attended by Yvonne Januschka, Asia Pacific Sales Director. Januschka said, “Shutterstock has been adding digital media such as high-quality photos and illustrations since its establishment in 2003 and is strengthening its position as a creative platform based on the latest technology. Korea is a very important market for Shutterstock, and we are working hard to build a better business relationship. Thank you for your interest in Shutterstock.” ▲ Shutterstock held a press conference for its strategy announcement. ▲ Yvonne Januschka said, "Since Korea is a very important market, we are working hard for a good relationship." According to the introduction, Shutterstock, founded in 2003 by Jon Oringer in New York, provides high-quality licensed photos, vectors, illustrations, icons, videos and music to professionals in corporations, marketing agencies and media around the world. Shutterstock also provides a separate business solution for workflow enhancement for businesses and agencies through the Shutterstock Premier platform. Currently, it has more than 160 million images and 8 million videos from more than 300,000 contributors, and an average of 150,000 new images are added every day. Approximately 1.7 million customers use Shutterstock in more than 150 countries around the world, and total downloads have reached 500 million, equivalent to 5.5 images downloaded per second. Korea is in the top 5 of Shutterstock's Asian markets, and about 1,000 contributors there work with Shutterstock. Recently, Shutterstock's mobile app was updated to make it easier for domestic users to browse and download images whenever and wherever they need them.
Shutterstock provides opportunities for contributors to develop by supporting spaces and activities where they can profit from their talents, and contributors in turn enrich Shutterstock's library. Korean illustrator Kim Yeon-hee was selected as a representative of Asia in 2017. ▲ Shutterstock is a creative platform based on digital media. ▲ Korean illustrator Kim Yeon-hee was selected as a representative of Asia in 2017. Shutterstock continues to evolve by introducing various innovative technologies beyond simply providing images and videos. First, it introduced search based on convolutional neural network technology developed in-house. In addition to keyword search, the 'reverse image search' function lets users find images with a similar look and feel by algorithmically analyzing an image they upload. Next, it introduced a new watermark-generation function that protects contributors' assets against the computer-vision watermark-removal method that Google researchers disclosed. Through API integrations with Adobe Photoshop and Microsoft PowerPoint, users can also use Shutterstock photos and illustrations directly within each application, allowing creative professionals to design faster and smarter. In addition, beginning in 2013, Shutterstock started supporting Facebook's basic ad-creation platform through a collaboration with Facebook, allowing advertisers to add professional images to their Facebook ads at no additional cost. By partnering through Shutterstock Premier, National Geographic made it easier to find the right images for its video themes and improved its workflow. BBDO, a global advertising agency, has also partnered with Shutterstock to deliver high-quality content and raise the level of its advertising campaigns. 
Moreover, through an API integration with Shutterstock in 2016, Google began using Shutterstock images in its advertising platform, making it easier for users to find images that match their digital advertising messages. Asked how Shutterstock will develop its activities in the Korean market, Yvonne Januschka said, "Shutterstock participated in the Seoul Design Festival and exchanged ideas with many Korean designers and marketers. We are delighted to introduce Shutterstock's innovative technology to support these activities, and we will continue to introduce a variety of innovative features for Korean companies, marketers, designers, entrepreneurs and contributors."
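The idea behind the reverse image search described above can be illustrated with a toy sketch: each image is reduced to a feature vector (in a real system, produced by a convolutional neural network), and the library is ranked by similarity to the query's vector. The file names and feature values below are invented purely for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def reverse_image_search(query_vec, library, top_k=2):
    """Rank library images by similarity to the query's feature vector."""
    ranked = sorted(library.items(),
                    key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Toy feature vectors; a production system derives these from a CNN.
library = {
    "sunset_beach.jpg": [0.9, 0.1, 0.2],
    "city_night.jpg":   [0.1, 0.9, 0.3],
    "sunset_hills.jpg": [0.8, 0.2, 0.1],
}
print(reverse_image_search([0.85, 0.15, 0.15], library))
# → ['sunset_beach.jpg', 'sunset_hills.jpg']
```

The "look and feel" matching comes entirely from the geometry of the feature space: visually similar images land near each other, so nearest-vector lookup returns similar pictures.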
Google AI Forum 8th Round: AI Innovation and Natural Language Processing
On December 5, 2017, Google hosted the 'Google AI Forum' on the theme of 'AI Innovation and Natural Language Processing' at the conference room of Google Korea in Gangnam-gu, Seoul. At this forum, Google introduced methods and examples of improving the user experience through natural language processing with machine learning. Google has long conducted research on natural language processing (NLP), focusing on developing algorithms that can be applied directly to a variety of languages and domains. These systems are used across Google products and services, helping to improve the user experience. Google works on the full range of traditional NLP tasks while also pursuing algorithms that run efficiently in highly scalable, distributed environments, including universal syntactic and semantic algorithms that support more specialized systems. Google's syntactic system predicts the morphological features of each word in a given sentence, such as part-of-speech tags, gender and number, and classifies words as subject, object, modifier, and so on. Google also focuses on efficient algorithms that exploit large amounts of unlabeled data, and has recently introduced neural network technology. In parallel, Google has recently worked on improving text analysis by incorporating knowledge and information from a variety of sources, and by applying frame semantics at the noun-phrase, sentence and document level. ▲ Director Hadar Shemtov from Google Research Team Hadar Shemtov, Director of the Google Research Team, pointed to "mobile" as the driving force behind the changing user environment, noting that more than half of today's queries are generated on mobile devices. As a result, he said, search increasingly demands an immediate "answer" rather than a "link," and the shift toward conversational interaction is becoming pronounced. 
Recently, Google's core work has been to recognize spoken input, convert it to text, understand it, and output the result as speech. Voice queries tend to be longer and closer to natural language. Sequential queries, which take conversational form and refer back to elements of the previous question, were also introduced as an important feature of voice queries. As the voice-response technology for these queries evolves, answers need to be shorter and more fluent for users. Accordingly, Google has focused on two NLP problems: condensing long sentences into short ones, and producing high-quality voice synthesis. To present a focused answer, the long natural-language question must be reconstructed into a short, effective form. Google first searches related documents for candidate answers, then narrows down to the paragraphs and sentences relevant to the answer within each document, and finally outputs the relevant answer. Since an additional search is performed within the document, this can be seen as a "search within a search." The NLP system defines the grammatical relations and groupings between words in a sentence. The key is finding, in a simple way, the core of the sentence that contains the desired answer. Google groups words through this process and then identifies the single node value most likely to fit the context, using statistical processing over many examples and cases. In addition, by building models with machine learning, it can produce answers that are grammatically correct while preserving the essence of the sentence. In this sentence-reduction method, the system must decide whether to keep or discard each word in the sentence. 
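The per-word keep-or-discard decision described above can be sketched as a toy compressor. The binary labels here are hand-assigned stand-ins for what a trained sequence model would predict; the example sentence is invented.

```python
def compress(tagged_sentence):
    """Keep only the words labeled as essential (1); drop the rest (0).

    A production system would predict these keep/drop labels with a
    learned sequence model rather than receive them by hand.
    """
    return " ".join(word for word, keep in tagged_sentence if keep)

# Hand-labeled toy input; the labels stand in for model predictions.
tagged = [("The", 0), ("extremely", 0), ("tall", 0), ("tower", 1),
          ("in", 1), ("Paris", 1), ("is", 1), ("the", 0),
          ("Eiffel", 1), ("Tower", 1)]
print(compress(tagged))  # → tower in Paris is Eiffel Tower
```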
By classifying all the words in a sentence and modeling signature values over many example sentences, a sequence-to-sequence model using LSTMs can label each word. Consequently, a simple sentence containing only the core is produced by eliminating the unnecessary parts. In this way, the NLP system can summarize sentences and derive simple, accurate outputs that contain only the core. ▲ WaveNet technology, with multiple layers between input and output, improves quality by combining multiple elements. In Google Assistant, the quality of voice output is critical, since the Assistant relies on a voice-only interface. Existing speech-synthesis techniques recorded syllables separately and then classified and recombined them as needed, which limited quality. WaveNet, a new probability-based speech-synthesis technology introduced by Google, instead uses digitized speech samples to capture the waveform information of speech, builds models, and learns from them. New text is then run through the model to produce high-quality results. On the voice side, WaveNet recognizes linguistic characteristics from vocalization and transcription based on the waveform information, and carries out speech synthesis through the constructed model. When new text is given, it is combined with the model and the learned linguistic characteristics to determine the new phonetic form and produce new speech. The algorithm has several layers between the input and output data, and many factors are combined to improve the quality of the result. Shemtov emphasized that although this voice processing is computationally expensive, the computation pays off in a higher level of quality than traditional voice-synthesis techniques. 
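The benefit of the many stacked layers mentioned above can be made concrete by computing the receptive field of a stack of dilated causal convolutions, the building block of WaveNet-style models. The doubling dilation schedule below is the pattern commonly described for such models, not a figure from the talk.

```python
def receptive_field(dilations, kernel_size=2):
    """Receptive field (in samples) of stacked dilated causal
    convolutions, as used in WaveNet-style models. Each layer with
    dilation d widens the field by (kernel_size - 1) * d samples."""
    return sum((kernel_size - 1) * d for d in dilations) + 1

# One WaveNet-style stack: the dilation doubles at every layer, so the
# receptive field grows exponentially with depth while the parameter
# count grows only linearly.
dilations = [1, 2, 4, 8, 16, 32, 64, 128]
print(receptive_field(dilations))  # → 256
```

This exponential growth is why a modest number of layers can condition each predicted audio sample on a long stretch of preceding waveform.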
Moreover, by digitizing the "waveform", a morphological feature of the analog domain, and predicting the sound wave millisecond by millisecond, it became possible to produce sound output similar to an actual human voice. ▲ Choe Hyunjeong, Lead of Google Computational Linguistics Team (NLU) According to Choe Hyunjeong of the Google Computational Linguistics Team, Google is putting significant effort into internationalization, having introduced the Assistant in about 15 countries, though the devices offered differ by country. Google has also introduced an Assistant available on Android in Korea. To launch the Assistant quickly in many countries, 'scalability' is important: building a solid system and taking full advantage of data-based machine learning makes it easier to expand into more languages. In globalizing the Assistant, Google first implements the basic NLP system in English, defines and designs the functions to be implemented, and then extends the whole language system to other languages. Most of the systems that make up the Assistant use machine learning, and recently deep learning with neural network models has also been adopted, particularly for problems that conventional rule-based approaches handle poorly, such as speech synthesis, speech recognition, and conversation-model construction. For both machine learning and deep learning, data is essential to training, and high-quality data collected for the purpose is indispensable. Moreover, since Google Assistant is a conversational model, additional aspects of the data must be considered: its character changes depending on whether the conversation is human-to-human or human-to-machine, and it shows different patterns by domain, such as the differences among spoken and written language, search queries, news and blog data. 
It was also mentioned that parallel data across multiple languages is necessary for extending to various languages. ▲ The 'Implicit Mention Detector' can restore omitted parts to fit the context Korean is one of the most difficult languages for data acquisition and modeling. In English, conversation between human and machine is not very different from conversation between humans, but Korean is different. In Korean dialogue, subjects and predicates are frequently omitted, making the context hard to follow, and honorific expressions are diverse and complex, with further subtleties of spacing and prosody. These points are therefore very difficult for a machine to understand and model, and Google is addressing the difficulties with a knowledge-based model. For the omission of sentence elements common in Korean conversation, Google introduced the machine-learning-based 'Implicit Mention Detector', which recognizes the omitted parts of a sentence and reconstructs it as a complete sentence. The system finds and marks all predicates and restores the implicitly hidden pronouns. All omitted subjects are restored, and all words referring to the same individual are grouped using a 'Co-Reference' model. Through this, many omitted subject and object words are restored and used for training. In addition, to understand the many human-language expressions with similar meanings, Google uses a 'Query Matcher'. It applies deep learning across various language systems by converting inputs into vector values, identifying similar meanings by computing distances between the vectors, and finally grouping them into a single group. Beyond this, to implement prosody, Google is developing a model that can understand and realize phrasing and prosody in the proper form.
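The Query Matcher approach described above, converting queries to vectors and grouping those whose vectors lie close together, can be sketched as follows. The two-dimensional embeddings and the queries are invented for illustration; real embeddings are learned by a deep model and are much higher-dimensional.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def group_queries(vectors, threshold=0.9):
    """Greedily group queries whose embedding vectors are close:
    a query joins the first group whose representative it matches,
    otherwise it starts a new group."""
    groups = []
    for query, vec in vectors.items():
        for group in groups:
            rep = vectors[group[0]]  # compare against the group's first member
            if cosine(vec, rep) >= threshold:
                group.append(query)
                break
        else:
            groups.append([query])
    return groups

# Toy embeddings; a real Query Matcher derives these with deep learning.
vectors = {
    "turn on the light":  [0.9, 0.1],
    "lights on please":   [0.88, 0.12],
    "what's the weather": [0.1, 0.95],
}
print(group_queries(vectors))
# → [['turn on the light', 'lights on please'], ["what's the weather"]]
```

Once queries are grouped this way, the Assistant only needs one handler per group rather than one per surface form.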
Platform Meetup from Facebook Media Briefing Session
On the morning of November 3, 2017, Facebook held a media briefing session for 'Platform Meetup from Facebook' at El Tower in Seocho-gu, Seoul. Hosting the global workshop 'Platform Meetup from Facebook' to showcase ways domestic developers and start-ups can make the most of the Facebook platform, Facebook arranged this press briefing, attended by Christine Chia, general manager of APAC Platform Partnerships at Facebook, to introduce the event's features and major case studies. Christine Chia said, "The Facebook platform is the best tool for developers and founders to reach their business goals and global markets. We promise generous support for start-up companies developing not only the Korean market but also overseas markets." ▲ Christine Chia said, "We will provide generous support for start-up companies through the Facebook platform" According to the announcement, Facebook continues to provide support and opportunities for partners such as developers, students, and companies to grow their businesses on the Facebook platform through the 'Facebook Platform Partnership'. In Asia in particular, including Korea, it builds tools for developers and start-ups to grow and focuses on developing programs and products that support developer communities. Under the 'Facebook Platform Partnership', two programs are running: 'FbStart' and 'Developer Circles'. First, 'FbStart' is Facebook's global program designed to help early-stage start-ups build and grow their businesses. The program provides support in three areas: "tools", which gives developers the tools and services they need for free; "support", which offers direct, exclusive mentoring from Facebook technical-support coordinators with start-up and entrepreneurial experience; and "community", which provides opportunities to connect with peers and colleagues through learning. 
Currently, about 6,000 start-ups in more than 130 countries form a global community through the 'FbStart' program. In Korea, Facebook launched the 'FbStart Seoul' program in 2015 to support new mobile-app start-ups with free development tools and mentoring throughout the app planning and production process. It runs a 'Bootstrap' program for start-up companies and an 'Accelerate' program for companies seeking growth after validating their initial business value. Facebook also provides free services to outstanding companies, including Facebook advertisement and PAS advertising credit, product testing, recruitment, customer management, video conferencing and document management. Next, 'Developer Circles' is a networking and growth program for developer communities, providing a global network of regional communities that use discussion forums on Facebook developer tools and services and share knowledge. The program helps participants discover a broader developer community and fosters goodwill through knowledge sharing, community, and access to information. Facebook also aims to empower developers to build applications that can be used in programs like 'FbStart'. Each 'Circle' is open to anyone interested in technology, such as students, entrepreneurs, and coding learners, with one member taking the 'Lead' role in charge of offline event planning and online community management. Facebook supports community organization and video materials for free so developers can share knowledge and collaborate on a variety of technical topics through discussion forums. ▲ 'FbStart' is designed to help start-ups at the early stage build and grow their businesses. Cases of domestic partners that achieved successful results on the Facebook platform were also shared. 
"Wanted", a recruitment platform based on acquaintance recommendations, shortened its sign-up and sign-in process and boosted platform activity through Facebook's sign-in function. By securing more reliable profiles through the Facebook sign-in integration, more than 1,400 companies now use Wanted, with more than 100 new companies joining each month. MangoPlate, a food search and recommendation platform, kept its database of new restaurants up to date and automated data entry by applying the Facebook Places API and Facebook sign-in to its services. After introducing these features, MangoPlate acquired 30,000 new restaurants in just two weeks, adding 14 times as many new restaurants to its database. OP.GG, a game-data analysis platform used by 27 million gamers around the world, simplified account creation through its collaboration with Facebook and analyzes gamers' usage patterns through the analysis tools, statistics, and insights Facebook provides. Retrica, a world-renowned camera app downloaded over 350 million times, introduced Facebook's authentication tool 'Account Kit' in an effort to attract overseas users. In addition to lowering its monthly SMS verification costs, it increased the sign-up success rate through Facebook by 15% within three months of introduction. Meanwhile, Facebook will provide in-depth information on platform-based Facebook products, such as native mobile apps, through 'Platform Meetup from Facebook'. It will also share tips on how companies can build meaningful relationships with customers on Facebook's platforms, along with a briefing on key updates from Facebook's annual developer conference, F8, held in April.
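As a sketch of the server-side step behind a Facebook sign-in integration like the ones above: after receiving a user's access token, an app typically calls the Graph API's /debug_token endpoint and checks the parsed response to confirm the token is valid and was issued for its own app. The field names below follow Facebook's documented response shape; the IDs are invented, and the HTTP call itself is omitted so the check can be shown offline.

```python
def is_valid_login(debug_token_response, expected_app_id):
    """Validate a parsed Graph API /debug_token response: the token
    must be marked valid and must have been issued for our app.
    (Error handling and expiry checks are omitted for brevity.)"""
    data = debug_token_response.get("data", {})
    return bool(data.get("is_valid")) and data.get("app_id") == expected_app_id

# Invented example of a parsed /debug_token JSON response.
response = {"data": {"app_id": "1234567890", "is_valid": True,
                     "user_id": "100004772"}}
print(is_valid_login(response, "1234567890"))  # → True
```

Verifying the `app_id` matters because a token issued to a different app, even if valid, must not be accepted as a login for yours.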
Chinese Great Firewall Tightens, While NordVPN Plans to Continue Operating in China
February 1, 2018. On January 30, China moved to force local and foreign companies, as well as individuals, to use only government-approved software to access the Internet. The intent is to block all international providers of VPNs (Virtual Private Networks). VPNs are widely used in China to access blocked sites such as Google and Facebook, and many foreign and local companies use VPNs for cross-border communication. According to the Chinese government, VPNs "unlawfully conduct cross-border operational activities." From now on, any foreign company that wants to conduct cross-border operations will need to set up a government-provided line or network. There are already reports of people having trouble logging on to WhatsApp and other communications apps used for business. In general, the government aims to gain more control over cross-border communication lines in China, and despite its claim that the changes will not affect the privacy and security of Internet users, the reality may be quite different. "Inability to freely access the global Internet will gravely affect the operations of foreign companies in China, as well as local companies that need to conduct unobstructed operations with the outside world," said Marty P. Kamden, CMO of NordVPN and a cybersecurity expert. "Using a government-approved VPN, which comes with constant monitoring of Internet activity, defeats the purpose of a VPN, which is supposed to provide unrestricted, encrypted and private Internet access. It's very dangerous to limit any society's access to balanced and truthful information, as well as to take away business communication tools. It might backfire with loss of businesses and blows to the economy, universities and schools. 
NordVPN plans to operate in China as much as possible and to work on ways to circumvent the Great Firewall.” Chinese telecom companies have been ordered to prevent their 1.3 billion subscribers from accessing the Internet via government-unapproved VPNs. However, history shows that Chinese people have always found ways to circumvent Internet blocks and censorship. “As long as China keeps the borders open, the economy grows, and the Chinese people travel abroad, they will not accept restrictions to their freedom to access the global Internet and to communicate freely,” said Marty P. Kamden. “The Chinese government should keep that in mind.” To find out more, please visit NordVPN.
Google AI Forum 6th Round: AI Innovation and Cloud
On the morning of September 12th, Google hosted the 'Google AI Forum 6th Round: AI Innovation and Cloud' at its office in Gangnam-gu, Seoul. Ahead of the coming AI-First era, the Google AI Forum is a monthly event organized by Google to provide opportunities for a deeper understanding of artificial intelligence and machine learning, with clear explanations and examples. ▲ Google AI Forum 6th Round: AI Innovation and Cloud was held. In the first session, Jang Hye-deok, director of Google Cloud Korea, gave an overview of the Google Cloud business. Jang Hye-deok opened by saying, "Google is a company with a mission to gather and organize the world's information and make it accessible to everyone, everywhere. It started with search, but today one billion people a day use each of its services, including Search, Android, Chrome, Maps, Play, YouTube, and Gmail." She explained that while Google runs Internet services at enormous scale, it has also been collecting and indexing a wide variety of data and using AI to serve and understand users more efficiently. For example, YouTube ingests large videos, stores them, and serves them to users all over the world, while Gmail connects quickly from anywhere in the world and automatically backs up email so users never lose it. Many such invisible conveniences are made possible by the constant work of Google's engineers and computer scientists. Jang Hye-deok said, "In more than 15 years in business, Google has worked through most sectors of computer science. The main task of Google Cloud is to package that work so external developers and corporate customers can use it according to their needs." Four characteristics were cited as Google Cloud's strengths. 
According to the introduction, Google has a global infrastructure and network connecting its data centers, including undersea cables it has installed, even though it is not a telecommunications company. Google's top-level engineers help customers reduce their operational burden and focus more on gaining insights. In addition, with pricing tailored to each client's situation, cost reductions of up to 60% compared with other clouds can be expected, and Google ensures extensibility through its open-source leadership in machine learning. Lastly, Jang Hye-deok said, "Because of our dense points of presence around the globe, customers will get a good user experience through Google Cloud no matter where they are." ▲ Jang Hye-deok, director of Google Cloud Korea, introduced the outline of Google's cloud business. ▲ Among Google's services, 7 are used by 1 billion people every day. ▲ Four features were mentioned as the strengths of Google Cloud. In the second session, Jia Li, director of Google Cloud AI and ML R&D, gave a video lecture on AI innovation and the cloud. Jia Li began, "Although AI grew out of academic research, it is now at the center of the greatest change in industry. Many examples show companies benefiting from the efficiency of AI, and because of this, AI will remain one of the most exciting areas." According to the lecture, the next stage of AI requires 'AI democratization' to reach the maximum number of people; it will lower barriers to entry and benefit as many developers, users and companies as possible. Moreover, for everyone to get the most out of it, attention must be paid to key elements such as computing, algorithms, data, talent and expertise. Google Cloud draws on the computing power of GPUs, CPUs, and Cloud TPUs to cover the entire machine-learning workflow. 
Among these, the Cloud TPU, presented at this year's I/O event, is the second generation of the Tensor Processing Unit. The first generation could only run a given machine-learning model, so training and inference had to be done on separate hardware; the second-generation TPU can both train and run a model. Performance reaches 180 teraflops (180 trillion floating-point operations per second) per unit. Google provides Cloud TPUs to customers through Google Compute Engine, enabling research institutes and companies that use machine learning to gain new efficiency and try more things in a short time. Next, Jia Li said, "Of course, computing power is essential and important, but it is only the first step in using AI. Even with all the computing power in the world, AI remains a very complex and challenging area, so companies need a variety of tools." She added, "Here, a tool may be a machine-learning library such as TensorFlow, or a model made available through a pre-trained API." Providing pre-trained APIs requires organizing training data, and one of the resources Google has prepared is ImageNet. ImageNet has more than 150,000 object categories and more than 14 million images. It is recognized as one of the largest visual datasets, and algorithms built on it have rapidly advanced the state of computer vision. Thanks to this, recognition error rates have dropped sharply, and this improvement is available to developers through the Cloud Vision API across various services, while developers can also deploy their own algorithms. These trained models are available as TensorFlow-based services and can be used in large-scale machine-learning projects. 
Since this service provides the underlying infrastructure and scales as needed, customers can focus solely on getting the best results from their machine-learning models. Jia Li emphasized the importance of data: "Just as humans learn from a lifetime of experience, AI needs huge amounts of data to keep up. Businesses need to learn how to collect, classify, and process meaningful data and run meaningful projects properly." She also noted that Google shares many kinds of datasets, including genetics-related public data and YouTube datasets. Meanwhile, she introduced Google's efforts to educate and invest in talent. Each year Google funds more than 250 research projects from around the world, and it also invests in people by providing doctoral scholarships and training thousands of interns. Google's in-house training program encourages its engineers to deepen their machine-learning expertise, and by extending it to external programs, companies can receive AI training at Google sites and work on real projects with Google's machine-learning specialists. Closing the lecture, Jia Li said, "AI is one of the most important technologies of our century, and we will work to keep Google Cloud at the forefront of the AI cloud. Technology that lets everyone enjoy benefits once reserved for those with expensive resources is the most meaningful technology for us, and this will be remembered as the first step toward providing and democratizing AI." ▲ Director Jia Li gave a video lecture on AI innovation and the cloud. ▲ She said there is a need to focus on key elements including computing, algorithms, data, talent and expertise. ▲ A variety of APIs are prepared for clients. In the final session, Lee Seung-bae, CTO of Ticket Monster (TMON), introduced the cases of Korean partners. 
According to the introduction, TMON now uses the OCR capability of the Google Cloud Vision API to find words that must not be used, even in very small fonts, across tens of thousands of product-description images that are thousands or tens of thousands of pixels in size. TMON is also considering convenience services using the Speech API or Natural Language API. Lee Seung-bae said, "Individually, these machine-learning techniques are not perfect in terms of accuracy, but they can be a great choice for quick results." ▲ Lee Seung-bae, CTO of Ticket Monster, introduced the cases of Korean partners.
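A minimal sketch of the downstream step in a pipeline like TMON's: once the Cloud Vision API's OCR has extracted the text from a product image, that text is scanned for prohibited words. The OCR output below is hard-coded for illustration, and the banned-word list is invented; in production the text would come from Vision's text-detection response.

```python
import re

def find_banned_words(ocr_text, banned_words):
    """Return the banned words that appear in OCR-extracted text.

    In a pipeline like TMON's, `ocr_text` would come from the Cloud
    Vision API's OCR (text detection); here it is hard-coded."""
    found = []
    lowered = ocr_text.lower()
    for word in banned_words:
        # Whole-word, case-insensitive match.
        if re.search(r"\b" + re.escape(word.lower()) + r"\b", lowered):
            found.append(word)
    return found

ocr_text = "World's best quality! 100% cure guaranteed. Free shipping."
banned = ["best", "cure", "miracle"]
print(find_banned_words(ocr_text, banned))  # → ['best', 'cure']
```

This split, expensive OCR once per image, then cheap text checks, is what makes scanning tens of thousands of large images practical.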
LINE FRIENDS Globally Launches New 'BT21' Characters Inspired by K-Pop Boy Band BTS
BT21, a set of characters created by global character brand LINE FRIENDS in collaboration with K-Pop boy band BTS, has finally been unveiled and is already a sensation. LINE FRIENDS revealed a total of eight new BT21 characters, each inspired by an individual BTS member's personality, values and tastes, through its LINE STORE and official BT21 social media channels. Only 10 days after their release on September 26, the characters had already recorded more than 8 million downloads and 71 million Twitter impressions. Moreover, the official BT21 social media channels on Twitter, Instagram, and YouTube have reached over 420,000 users, demonstrating rapidly growing popularity. Fans around the world have created their own fan art using the BT21 characters, generating even more buzz. The BT21 line is the first output of LINE FRIENDS' new 'FRIENDS CREATORS' project, which focuses on creating new intellectual property (IP) with global artists. 'FRIENDS CREATORS' is LINE FRIENDS' long-term creative strategy to build a new kind of character-IP business, combining the company's capabilities and assets in the character business with the creativity of global artists from different fields. The launch of BT21 is significant as the first time in the character industry that all members of a group have actively participated in the entire process of creating original characters. Rather than simply having characters drawn in their likeness, the BTS members sketched the characters and detailed each one's characteristics, preferences and values in close collaboration with LINE FRIENDS. Building on the great popularity of BT21, LINE FRIENDS will release a behind-the-scenes video of BTS creating their characters and will later introduce a wide variety of BT21 merchandise and games. 
In particular, LINE FRIENDS will showcase BT21 products at its flagship store in New York and at the famous Korean luxury boutique 'Boon The Shop' in Seoul this coming December. The products will also be available in Japan, Taiwan, Hong Kong and Thailand. "BTS, the first artist to join the 'FRIENDS CREATORS' project, clearly embodies LINE FRIENDS' key philosophies: Global, Millennial and Trend. BTS are modern storytellers who can share the story of how the characters were developed and how they reflect each member's characteristics," said LINE FRIENDS. The company added, "LINE FRIENDS will continue to put its full effort into collaborating with leading global artists to create new IP and content that will be loved by existing fans and by the younger generation in their teens and 20s." A total of 14 behind-the-scenes episodes about the development of the BT21 characters will be released starting October 17 through the official BT21 YouTube channel and website.
Google AI Forum 1st Round: AI Overview & Inside Google Translate Technology
On the morning of February 9, Google hosted 'Google AI Forum 1st Round: AI Overview & Inside Google Translate Technology' at Google Campus Seoul in Gangnam-gu, Seoul. The Google AI Forum is a monthly event organized by Google to offer a deeper look at artificial intelligence and machine learning in the AI-first age, with clear explanations and examples. ▲ Google AI Forum 1st Round opens. ▲ Park Yeong-chan gave an overview of AI and machine learning. ▲ Research scientist Mike Schuster introduced 'Google Neural Machine Translation' technology. The first session was led by Park Yeong-chan, a tech leader and software engineer at Google, who gave an overview of AI and machine learning. Artificial intelligence (AI), he explained, is a field that makes things smarter by combining various computer science technologies. Park noted that some regard AI as a distant dream that will take hundreds of years to realize, while others see it as a way to take tiresome duties off people's hands. Its sub-concept, machine intelligence, is a machine-assisted technique for solving problems on specific topics; it is characterized by specialization in a small number of subjects, just as AlphaGo plays only Go (Baduk). Machine learning, in turn, is a technique that allows a machine to train itself from examples instead of having every operation programmed by hand. Machine learning methods have been under development for 30 to 40 years. In neural networks, which mimic biological neurons, millions or billions of artificial neurons build up knowledge by passing the information each receives on to other neurons. In this process the neurons form several layers, each learning from the information transmitted by the layer below, and this is called deep learning.
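The layered structure described above can be illustrated with a toy forward pass. This is a minimal sketch, not any production network: the layer widths, inputs and random weights are arbitrary assumptions chosen only to show how each layer transforms the output of the layer below.

```python
# Toy sketch of stacked neural-network layers: each layer of "neurons"
# combines the outputs of the layer below, so higher layers see
# increasingly abstract combinations of the raw input.
# Weights are random and untrained; only the forward pass is shown.
import math
import random

random.seed(0)

def layer(inputs, n_out):
    """One layer: every output neuron sums all inputs with random
    weights, then squashes the result through tanh."""
    return [math.tanh(sum(random.uniform(-1, 1) * x for x in inputs))
            for _ in range(n_out)]

signal = [0.2, 0.7, 0.5]           # raw input features
for depth, width in enumerate([8, 6, 2], start=1):
    signal = layer(signal, width)  # each layer feeds the next one
    print(f"layer {depth}: {len(signal)} neurons")
print(signal)  # the top layer's abstract representation of the input
```

Stacking more layers would not change the code, only the width list, which is the sense in which depth is cheap to express but expensive to train.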
Through this process, the neural network's highest layer learns very abstract patterns, since the patterns produced by each layer are learned again by the layer above. There are currently three main approaches to machine learning. First, supervised learning, the most commonly used method, repeats the learning process on labeled data about a specific situation until the model settles on the correct answers. It is highly accurate and effective, but because it requires a lot of data, results improve as more data about the situation becomes available. Next, unsupervised learning derives answers by grouping similar items while scanning all the data, without labels or samples. It is used mainly in experimental areas that humans do not yet understand, or where labeled samples are hard to obtain. Finally, reinforcement learning is somewhat different from the other two: the machine learns on its own from the results of repeated, initially random actions, without being given specific information. It is the most difficult method and sees little practical use so far, but research on it is the most advanced. Park introduced Google's current uses of machine learning in Gmail spam filtering, speech recognition, photo search, image recognition and automatic translation, and explained why AI is receiving so much attention now even though it has been researched for decades: "Research has progressed at a faster pace, and results in machine learning have emerged thanks to rapidly developing computing infrastructure, cheaper storage and the emergence of new deep learning models." He added that, given the same research question, teams with more data, better models or more computation come out ahead.
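Of the three approaches, supervised learning is the easiest to make concrete. Below is a minimal, hypothetical sketch: a single logistic "neuron" trained by gradient descent on synthetic labeled points (label 1 when x + y > 1), repeating the learning process until its predictions settle on the answers; the dataset and hyperparameters are illustrative assumptions.

```python
# Minimal supervised learning: logistic regression trained by
# stochastic gradient descent on synthetic labeled data.
import math
import random

def train(samples, labels, lr=0.5, epochs=1000):
    """Repeat the learning process over labeled examples until the
    model's predictions settle on the correct answers."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            z = w[0] * x1 + w[1] * x2 + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
            err = p - y                     # cross-entropy gradient w.r.t. z
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

random.seed(0)
samples = [(random.random(), random.random()) for _ in range(200)]
labels = [1 if x1 + x2 > 1.0 else 0 for x1, x2 in samples]  # "answer key"
w, b = train(samples, labels)
print(predict(w, b, 0.9, 0.9))  # point well above the x + y = 1 line
print(predict(w, b, 0.1, 0.1))  # point well below the line
```

As the text notes, accuracy here depends directly on data volume: with only a handful of labeled points the learned boundary would drift far from x + y = 1.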
AI technology is therefore expected to grow faster than ever in the near future and to deliver results one by one. ▲ Machine intelligence and machine learning are sub-concepts of artificial intelligence. ▲ In deep learning, neurons form multiple layers, each learning the information delivered from the layer below. ▲ There are currently three major approaches to machine learning. ▲ Google applies machine learning to a variety of services. In the second session, Mike Schuster, a Google research scientist, introduced the 'Google Neural Machine Translation' technology via a video lecture. Schuster explained why Google has focused its efforts on translation: 50% of the content on the internet is in English, but only 20% of the world's population can read it. In other words, to make information more accessible and to bridge communication across countries, translation needs to improve, so Google is paying close attention to translation services. Today, Google translates over 140 billion words a day, and more than one billion sentences pass through the service daily. About 500 million people actively use Google's translation services each month, and its 103 languages cover 99% of online users. 'Google Neural Machine Translation', released in September 2016 and applied to 8 language pairs in November, differs from conventional phrase-based machine translation, which splits a sentence into words and phrases. Instead, it translates the whole sentence at once, understands and rearranges the most appropriate translation according to context, and produces output close to a natural sentence that follows grammatical rules.
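The contrast with phrase-based translation can be sketched in a few lines: an encoder network first folds the entire source sentence into one fixed-size context vector, and only then would a decoder (omitted here) begin producing output in the target language. This is a toy illustration with a tiny vocabulary and random, untrained weights, not Google's implementation.

```python
# Toy encoder for whole-sentence translation: the entire source
# sentence is compressed into one context vector before any output
# word is produced, unlike phrase-based systems that translate
# word and phrase fragments independently.
import math
import random

random.seed(1)
DIM = 4
VOCAB = {"the": 0, "cat": 1, "sat": 2}  # hypothetical mini-vocabulary
# Random, untrained embedding and recurrent weights (illustrative only).
EMB = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in VOCAB]
W_H = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(DIM)]

def encode(sentence):
    """Fold every word of the sentence, in order, into one hidden state."""
    h = [0.0] * DIM
    for word in sentence.split():
        x = EMB[VOCAB[word]]
        h = [math.tanh(x[i] + sum(W_H[i][j] * h[j] for j in range(DIM)))
             for i in range(DIM)]
    return h  # fixed-size context vector summarizing the whole sentence

ctx = encode("the cat sat")
print(len(ctx))  # same size regardless of sentence length
```

Because the decoder would condition every output word on this sentence-level context, word order and agreement can be rearranged globally, which is what makes the output read like a natural sentence.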
It includes an end-to-end learning system that improves translation quality by learning from millions of examples, and the introduction of 'Google Neural Machine Translation' was confirmed to have improved quality for the language pairs used in the demonstration. When an English sentence was translated into Korean and then back into English, conventional phrase-based machine translation was not very accurate, but with 'Google Neural Machine Translation' the round-trip result came back with a high degree of accuracy. Comparing the quality improvements, translation into English from Korean, Turkish and Chinese showed especially large gains. As a result, English-Korean translation traffic on Android has risen by as much as 50% over the past two months. At the same time, 'Google Neural Machine Translation' adopted 'Zero-Shot Translation', which handles multiple languages in a single system; multilingual training not only improved quality but also enabled translation between language pairs the system was never directly trained on. For example, knowledge of English-Korean and English-Japanese translation enabled translation between Korean and Japanese, a pair that was never trained. Moreover, research into a common representation, in which sentences with the same meaning are encoded in similar ways regardless of language, revealed that sentences of similar meaning gather in clusters. Schuster also said that translation speed, which once averaged about 10 seconds per sentence, was reduced to an average of 0.2 seconds within about two months. He noted, however, that there is still room to improve 'Google Neural Machine Translation' in the future.
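As described, the mechanism behind multilingual and zero-shot translation comes down to a simple input convention: the source sentence is prefixed with an artificial token naming the desired target language, and one shared model learns to condition on it. The sketch below shows only that data convention; the token format and helper name are my own assumptions, and the shared model itself is omitted.

```python
# Data convention for a single multilingual translation model:
# prefix every source sentence with a token naming the target
# language. At inference time, an untrained pair (e.g. Korean ->
# Japanese) can be requested simply by choosing the right token.
def make_training_example(src_sentence, tgt_lang, tgt_sentence):
    """Build one (input, output) training pair; the model itself
    (omitted) learns to condition its output on the prefix token."""
    return (f"<2{tgt_lang}> {src_sentence}", tgt_sentence)

# Trained directions: English -> Korean and English -> Japanese.
pairs = [
    make_training_example("Hello", "ko", "안녕하세요"),
    make_training_example("Hello", "ja", "こんにちは"),
]
# Zero-shot request: Korean -> Japanese, a direction never seen in training.
zero_shot_input = "<2ja> 안녕하세요"
print(pairs[0][0])
print(zero_shot_input)
```

The clustering finding mentioned above is what makes this plausible: if the shared encoder maps same-meaning sentences from different languages to nearby representations, the decoder can produce the requested language even for an unseen source-target combination.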
He said, "Humans can easily translate numbers and dates, which machine translation does not always get right. Machine translation also sometimes mistranslates short or rarely used sentences, and proper nouns such as personal names and brand names. But you will see 'Google Neural Machine Translation' steadily improve, as a group of expert researchers is working day and night to solve these problems." ▲ More than one billion sentences are translated daily through Google Translate. ▲ 'Google Neural Machine Translation' produces noticeably more natural results than previous translations. ▲ 'Zero-Shot Translation' enables translation between untrained combinations of languages. ▲ Shortcomings remain, and improvements will be made steadily.
The Fifth Anniversary Press Conference of Facebook Korea
Facebook Korea held a press conference celebrating the fifth anniversary of its Korean branch at its office building on December 14, 2015. The event was run under Facebook's mission of 'a more open and connected world', with the theme 'connect, connect, and connect'. Yong-Beom Jo, head of Facebook Korea, explained, "When Facebook was listed on the US stock market, its filing began with the sentence 'Facebook was not created to be a company. It was started for a social responsibility - the responsibility of connecting the world and realizing that world'," and then shared recent news. Facebook has set three principal investment areas: Oculus's VR technology, in which it has invested expecting it to become the next-generation platform that enriches human life in areas such as film, education and medicine; the 'internet.org' project, which started from the position that internet access should be a right; and artificial intelligence (AI), which uses data to consider how to enhance human communication and life. Beyond these, it is developing several products offered as services at home and abroad. Next, director Hyeon-Ho Son introduced the branding and creative business strategy, director Yeong-Jun Jo the performance marketing platform business strategy, director Hyeon-Seok Park agency partnerships, and director Gi-Yeong Kim business support for small businesses. ▲ Facebook Korea branch manager Yong-Beom Jo ▲ Facebook Korea director Hyeon-Ho Son ▲ Facebook Korea director Yeong-Jun Jo ▲ Facebook Korea director Hyeon-Seok Park ▲ Facebook Korea director Gi-Yeong Kim Q1. What do you think about the comment that there is too much advertising? The user experience is what matters most. Even with a business goal in place, Facebook Korea continually checks whether the user experience is improving, and uses that index to keep advertising from becoming excessive. Q2.
How is the Instant Articles partnership policy going? Media channel partnerships are handled through Facebook. The service is in trial with Subusu News, and the plan is to extend it to most of the press next year; details will be shared with publishers as soon as they are confirmed. Q3. Why were cases involving blind users introduced in connection with AI? Generally, AI is mostly about maximizing abilities humans already have. The video played earlier shows a system that collects the data needed to recognize objects: after studying a great many images of, say, puppies, the AI learns to recognize them, and the result can then be explained in audio. Computers and machines amplify human abilities, grasp the meaning of elements in an image, and describe them aloud. Q4. The founder's donation was in the news. Does the Korean branch have any plans for social contribution in Korea? This year, the missing-child alert campaign is representative: when a child goes missing, alerts are pushed through Facebook. For next year, Facebook Korea is considering products it can offer as a social enterprise. Q5. Please sum up the development of the last 5 years. What is the outlook? The Korean branch started with 4 people at its original office, but has grown considerably since. The number of Korean users is steadily increasing, and I expect it to keep increasing. Speaking of the Korean market, people here may not realize that living in Korea means living in the future. No other country has smartphone use and mobile video consumption this widespread; Korea ranks top in digital video consumption. Looking at the trends, it is a very future-oriented market. Q6. In news platforms Naver is unrivaled, but recently Facebook has become popular. What is your competitive strategy against Naver for next year? I also wonder about differentiation from Naver and your localization policy going forward.
Our policy is not to answer questions comparing us to a competitor. As for localization, we localize as much as we can, but with 1.5 billion people using the service we emphasize a global standard, since country-specific versions could fragment it; only a few key things are localized. When torn between local and global, the answer comes back to the company's mission: work for humanity first, and work that serves only a specific region is pushed down the priority list. Q7. Video advertising is something Facebook Korea has not done until now. What is the plan? Video advertising has been shown on the platform itself since the end of last year. Recently, Facebook Korea held a customer event focused on video, called Facebook Play. Since then, video advertising has been growing very fast overall; setting aside comparisons with any specific platform, more and more customers prefer watching video to clicking links. Products not yet released will arrive one after another; the representative one is 360-degree video, which some advertisers are already launching abroad. Q8. Will you expand personal video uploads? Please also explain the startup support plan. In the past, text posts were the norm here and photos were the norm abroad; with fast internet and widespread smartphones, Korea is moving to video. Facebook Korea sees 360-degree video as the middle stage between video and virtual reality: virtual reality is experienced through a headset, but delivering that experience on a phone gives people a greater sense of reality. Facebook Korea has an organization that works one-on-one with startups. For a startup aiming at global markets, it works directly with them to identify the target country; for a market a startup wants to enter, Facebook Korea closely reviews the services in question beforehand and supports them through the branch manager in that country. Startups have shown success stories through Facebook, and many inquiries concern startups.
In 2015, it ran marketing seminars with more than 200 businesses, including seminars held in cooperation with VCs such as D.CAMP. Facebook Korea is thinking of scaling this up next year: increasing VC partnerships and finding a position of leadership. Q9. In e-commerce, where sales can happen directly on Facebook, what is the potential for growing the business? There is no such plan as yet. Korean companies are continuously releasing products and achieving good results through e-commerce, and Facebook supports that; it will help them generate results from mobile commerce as well. For example, 2-3 years ago there were music-listening services inside Facebook, launched by companies that came onto the Facebook platform. Likewise, the main work is building an ecosystem that companies can enter to develop their business. Q10. With targeted advertising, the concern is precedents of misusing personal information. If targeting becomes more accurate, won't more user information be used? Facebook attaches great importance to personal privacy; protecting information is the most significant factor in business decisions. From a user's point of view, it is not bad to see a relevant advertisement, and if you do not like an advertisement, you can say so via the control on its right side. The bigger concern is whether marketers craft the right message, not misuse. Facebook does not sell information externally: for instance, it provides grouped information such as 'people living in Seoul' or 'graduates of a specific college', never an individual's information such as Yong-Beom Jo's. Facebook does not let advertisers inspect the delivered data; they can only see which group their campaign reached. Q11. (Acrofan) As Facebook is an internet service, its quality varies with the condition of the communication network, and yesterday there was trouble on one particular network line.
Please describe your monitoring of and countermeasures for this. User experience is the most significant thing. Through continuous monitoring, Facebook is trying to improve the environment together with network and cable operators, and the service has improved a lot compared with 2-3 years ago. Interestingly, in terms of speed Korea is very fast compared with the US; by US or global standards it excels, yet Korean users still feel it is slow, so Facebook will keep investing because the user experience in Korea matters. For monitoring, a beta version is installed on employees' smartphones, and they can file a report instantly just by shaking the phone; this reporting rate is very high. When a problem occurs, employees actively shake and report, and issues such as bugs, wrong translations and strangely uploaded images are reported the same way. ▲ With the Instant Articles service launch ahead, there were more participants than ever before.