Zoom Elevates Platform Experience with Launch of Zoom Apps and Zoom Events
Lucidworks Joins Google Cloud Partner Advantage Program and Launches Fusion, AI-Powered Search Platform, Through Google Cloud Marketplace
Zoom Announces Zoom Events Platform for Virtual Experiences
Artificial intelligence monitoring: a “must-have” in an online world
Hyperconnect: key to social platform success is reflecting the ‘user voice’
Telestream Launches Enhanced DIVA Digital Asset Management Software
G.hn certification reaches record levels as the HomeGrid Forum warns about deploying uncertified products
Wi-Fi Urgently Needs More Spectrum
QNAP Launches High-speed Dual Band Wi-Fi 6 PCIe card for NAS and PC
Primestream Joins SRT Alliance to Work Toward High-Quality, Low-Latency Internet Video Streaming for Remote Workflows
UKWISPA showcases Terragraph from Facebook as first three global vendors prepare Fixed Wireless Access for Gigabit Britain
The UK Wireless Internet Service Providers Association (UKWISPA) hosted senior executives from Facebook, Siklu, Radwin and Cambium for the first time as they gear up for the release of Terragraph multi-gigabit technology in the UK. Terragraph, from Facebook’s Connectivity arm, promises low-cost broadband delivery at speeds of over 1 gigabit, with the potential for over 10 gigabits, using license-exempt Fixed Wireless Access (FWA) in the UK from later this year. UKWISPA members Radwin, Cambium and Siklu are all set to ship products this year in time to help facilitate the UK Government’s Gigabit Britain pledge.

“Terragraph is set to revolutionise the cost and speed of roll-out for gigabit-plus broadband here in the UK,” stated David Burns, chairman of UKWISPA, the UK’s body responsible for wireless internet services. “At less than £200 per premises passed and under £900 per premises connected, with no disruptive duct digging, Terragraph equipment can bring multi-gigabit connections to the masses - quickly,” continued Burns.

Facebook has been developing the technology for around five years and has built pilot systems in several countries around the world using prototype equipment. Now, Facebook has attracted a range of global equipment manufacturers to form a whole ecosystem around the technology. Significantly, the global silicon manufacturer Qualcomm has developed a high-volume, low-cost chipset (based on 802.11ay) to enable the extremely high data speeds needed to make Terragraph affordable. This has enabled a range of equipment vendors, including UKWISPA Technology Members Cambium Networks, Radwin and Siklu, to release products in 2020, and has encouraged others, such as MikroTik and IgniteNet, to commit to joining the community.

“Facebook recognised that new applications require high-speed connectivity and, with data consumption growing at an ever-increasing rate, the demand for broadband cannot be matched by the current ability to build new high-speed networks. 
With Terragraph, Facebook is creating an ecosystem to address this gap and serve under-connected communities. We helped assemble a technology stack with a range of partners, assisted with spectrum advocacy and the specification of the 802.11ay standard, and built an industry ecosystem to realise the potential of this technology,” stated Neeraj Bhatia, Product Manager at Facebook Connectivity.

“Terragraph networks are built on very high-speed, resilient mesh equipment: a small, low-power device is mounted on a building or street furniture and communicates with up to 16 other units on other buildings to form a mesh. This method perfectly complements fibre build-out, with the Terragraph mesh filling gaps that would otherwise be expensive to serve. As a mesh, data can pass in all directions at full speed, meaning upload and download speeds are symmetric and the mesh can tolerate breaks without stopping. Moreover, it is so fast that it can seamlessly blend with fibre to create a fully hybrid network that suits the local conditions,” added Burns.

“UKWISPA members are itching to install Terragraph services to help more customers across the country, to upgrade their existing customers to gigabit speeds and to complement their fibre plans,” concluded Burns.
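The resilience Burns describes (each node linking to up to 16 neighbours, so traffic can reroute around a broken link) can be illustrated with a toy sketch in Python. This is illustrative only: the node names and the breadth-first rerouting below are assumptions for the example, not Terragraph's actual routing protocol, which is not public.

```python
from collections import deque

def bfs_path(links, src, dst):
    """Breadth-first search for any path from src to dst over a set of links."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adj.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route left

# Toy mesh: a fibre point of presence ("pop") plus four rooftop nodes.
links = {("pop", "a"), ("pop", "b"), ("a", "c"), ("b", "c"), ("c", "d")}
print(bfs_path(links, "pop", "d"))  # a route exists via a or b

links.discard(("a", "c"))           # simulate a broken link in the mesh
print(bfs_path(links, "pop", "d"))  # traffic reroutes via node b
```

The point of the sketch is the second call: removing one link does not isolate node "d", because the mesh topology offers an alternate path.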
Yamaha UC Helps Connect Assisted Living Facility Residents to Loved Ones
Yamaha Unified Communications is helping the residents of Cornerstone at Milford Assisted Living & Compass Memory Support in Milford, Massachusetts, stay connected during the coronavirus pandemic and beyond. The company donated a CS-700 Video Sound Bar™ to the facility as a way for residents who struggle to see and hear calls on smartphones and tablets to have a more comfortable and enjoyable audio and video experience. "Staying connected while apart is vital to mental health, especially for those who are living in retirement and care communities and aren't able to see their loved ones now," said Meghan Kennelly, Yamaha UC's director of global marketing and communications. "While mobile devices are convenient, they don't always provide sufficient volume or a large enough picture to see clearly. I saw how much my grandmother and others at Cornerstone were struggling with mobile devices, and Yamaha immediately wanted to help. The CS-700 was built around the idea that conversations — whether they happen in a meeting room or in a care facility — need to be clear and stress-free to be effective and enjoyable. We love hearing all the stories of the residents who now look forward to this time with their families."

After Cornerstone had to close its doors to visitors due to COVID-19, the senior community naturally turned to FaceTime, Zoom, and Skype calls via iPhones and iPads to connect residents with their families and loved ones. However, many of the residents had to hold the device close to their face in order to clearly hear the conversation and see their families. As a result, their families couldn't see or hear them well. Kennelly, whose 93-year-old grandmother is a resident of the Milford assisted living facility, reached out to the facility to see if Yamaha could help by donating a CS-700 all-in-one collaboration system. 
"Our residents are really enjoying seeing and hearing their loved ones more clearly on the big screen," said Michelle Hamilton, director of community relations at Cornerstone. "We are so appreciative of Yamaha's generosity to help enhance their virtual visits with their friends and families." Perfectly suited for quickly adding natural collaboration amenities to nursing, retirement, and care facilities, the Yamaha CS-700 is designed to deliver the highest quality audio, video, and collaboration capabilities. The simple and smart wall-mounted unit is easy to install, deploy, and use. It features an adaptive beamforming microphone array for perfectly captured conversation; four Yamaha speaker elements to provide the highest degree of audio intelligibility; and a wide-angle HD camera for the far-end participants to see everyone in clear detail. Building managers simply connect the system to a display and any UC platform via a single USB cable. The CS-700 is also Zoom certified, which simplifies the call experience even further by automatically detecting the CS-700, enabling mute sync, and enhancing the audio in the Zoom cloud so background noise doesn't interfere with calls.
QNAP Launches High-performance quad-core 1.7 GHz NAS for Reliable Home and Personal Cloud Storage
QNAP® Systems, Inc., a leading computing, networking and storage solution innovator, today launched the quad-core TS-x31K series NAS (including 1-bay, 2-bay and 4-bay models) that provides centralized data backup and management, easy file access and sharing, feature-rich multimedia applications and secure snapshot protection. Featuring a compact, pure-white minimalist design, the TS-x31K blends in with any home décor and takes up very little space, making it an ideal solution for home users to build reliable private cloud storage. The TS-x31K series is powered by a quad-core 1.7 GHz processor for exceptional home performance. With 1 GB RAM, Gigabit LAN (1-bay: one GbE port; 2-bay and 4-bay: two GbE ports), SATA 6 Gb/s, and AES-256 bit encryption, the TS-x31K delivers fast and stable connectivity. Featuring tool-less and lockable drive bays, the TS-x31K makes installation easier while also ensuring the drives are safe and secure.

“The quad-core TS-x31K series streamlines home storage and multimedia applications, allowing users to enjoy the convenience of a personal cloud. Users can easily access, manage and share files using an intuitive user interface, while also easily accessing files remotely by using dedicated mobile apps,” said Jason Hsu, Product Manager of QNAP.

The TS-x31K series is a comprehensive home data center that provides well-rounded data storage, sharing, backup, synchronization, and data protection. Users can regularly back up data from their Windows® and macOS® computers and from mobile devices, then further protect their backup data by saving it to another NAS or to cloud storage as an off-site copy using HBS (Hybrid Backup Sync). Users can enable snapshot protection to effectively mitigate the threat of ransomware and to quickly restore files to previously recorded states. 
The TS-x31K provides a wide range of multimedia applications, including Photo Station, Video Station and Music Station, allowing users to easily manage and view their rich media collections. Users can also transform the TS-x31K into a Plex® Media Server. Other useful functions include Surveillance Station for building a secure surveillance system and Qsync for automatically syncing files between the NAS, mobile phones and computers. Users can also easily access the TS-x31K remotely using dedicated mobile apps and the myQNAPcloud service.

Key Specifications:
TS-131K: Tower model; 1-bay; Annapurna Labs AL-214 quad-core 1.7 GHz processor; 1 GB RAM; hot-swappable 3.5-inch SATA 6 Gb/s bays; 1 x GbE port; 3 x USB 3.2 Gen 1 ports
TS-231K: Tower model; 2-bay; Annapurna Labs AL-214 quad-core 1.7 GHz processor; 1 GB RAM; hot-swappable 3.5-inch SATA 6 Gb/s bays; 2 x GbE ports; 3 x USB 3.2 Gen 1 ports
TS-431K: Tower model; 4-bay; Annapurna Labs AL-214 quad-core 1.7 GHz processor; 1 GB RAM; hot-swappable 3.5-inch SATA 6 Gb/s bays; 2 x GbE ports; 3 x USB 3.2 Gen 1 ports
Harmonic Providing Free Access to Video Streaming Innovation During Global Lockdown
The current global health crisis has given rise to rapid growth in network traffic, resulting in strain across global broadband networks. In order to address the recent surge and to support its many customers during these unprecedented times, Harmonic (NASDAQ: HLIT) today announced it will provide its EyeQ™ content-aware encoding (CAE) technology free for the next 90 days to help alleviate the current network constraints. Harmonic's EyeQ technology leverages artificial intelligence to reduce streaming congestion on broadband networks by up to 50% without impacting quality. It serves as a highly effective tool to combat the recent surge and a step above recently utilized methods, such as resolution and bit rate reductions. EyeQ CAE is available on all Harmonic media processors, through a software license, and the VOS®360 Live Streaming Platform. "Due to the nearly global lockdown, the demand for high-quality streaming services has exploded and as such has led to excessive constraints on broadband networks. Adding our EyeQ technology to the mix can greatly decrease bandwidth consumption for high-quality video and alleviate strain on local and national broadband networks," said Shahar Bar, senior vice president, video products and corporate development at Harmonic. "By offering free licenses for the expected lockdown timeframes, Harmonic is providing an effective tool to help both media companies and broadband networks alike." Harmonic will highlight its EyeQ CAE technology through Live Connection, a 30-day virtual showcase highlighting Harmonic's latest video streaming innovations. The online interactive series will shine light on the powerful benefits of Harmonic's software solutions and cloud-based platforms providing increased agility, flexibility, efficiency and continuity to meet the growing consumer demand for streaming and broadcast services.
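Harmonic has not published EyeQ's internals, but the general idea of content-aware encoding can be sketched: instead of spending a fixed bitrate on every scene, the encoder allocates bitrate according to each scene's visual complexity, so simple scenes consume far less bandwidth. The function, the bitrate bounds and the normalised complexity score below are all hypothetical, chosen only to illustrate the principle:

```python
def cae_bitrate(complexity, floor_kbps=800, ceiling_kbps=6000):
    """Toy content-aware rate selection (hypothetical parameters):
    map a 0.0-1.0 scene-complexity score onto a bitrate range, so
    simple scenes get far less bitrate than a fixed ladder would spend."""
    if not 0.0 <= complexity <= 1.0:
        raise ValueError("complexity must be in [0, 1]")
    return round(floor_kbps + complexity * (ceiling_kbps - floor_kbps))

fixed_kbps = 6000                    # a constant bitrate a non-CAE encoder might use
scenes = [0.2, 0.9, 0.1, 0.5]        # e.g. talking head, action, slate, drama
adaptive = [cae_bitrate(c) for c in scenes]
savings = 1 - sum(adaptive) / (fixed_kbps * len(scenes))
print(adaptive, f"{savings:.0%} saved")  # roughly half the bandwidth of the fixed rate
```

On this toy workload the per-scene allocation cuts total bandwidth by about half relative to the fixed rate, which is the same order of saving the article attributes to EyeQ.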
Infinet Wireless Delivers High Definition CCTV Network for Busy British Town Center
Infinet Wireless, the global leader in fixed broadband wireless connectivity, has provided the town of Ipswich, in the United Kingdom, with a network overhaul of its CCTV system to enable high-definition (HD) imaging and video. Ipswich Borough Council (IBC), responsible for governing the town, which is home to 140,000 people, required an upgrade to its CCTV platform to meet modern surveillance demands for HD recording and for monitoring the town centre’s daily activities accurately and dynamically. Infinet Wireless’ InfiMAN 2x2 Point-to-Multipoint (P2MP) and InfiLINK 2x2 Point-to-Point (P2P) solutions were selected to deliver, via a fixed broadband wireless platform, a much-improved video-surveillance network which would significantly reduce operational costs whilst delivering HD real-time video streams and images. Dozens of Infinet Wireless’ industry-leading base stations and remote subscriber terminals have been deployed so far across the town to ensure seamless connectivity to all remote sites.

“The previous video-surveillance infrastructure we had in place throughout the town was outdated and we were merely coping with our mission-critical needs. After consultations with various parties, we quickly realised that we needed a complete overhaul of the existing network. As a local authority, keeping costs down whilst ensuring a high quality of service and reliability is the balance we were looking to achieve,” said Debbie Clements, Emergency Service Centre Manager at Ipswich Borough Council. “Infinet Wireless’ solutions were an excellent choice as they were not only very quick to deploy, but were affordable and easy to manage. The safety of our local community and visitors to the area is one of IBC’s seven key priorities, so by upgrading our network we are ensuring we can monitor and indeed keep them all safe around the clock. 
We are already seeing huge benefits from the high reliability of the new platform and the overall video quality.” Infinet Wireless’ solutions were selected for their ease of deployment and proven scalability to meet future requirements, such as increasing bandwidth demands and additional applications to be introduced by IBC. The InfiMAN 2x2 solution was selected thanks to its affordability, its reliability and its full support of critical real-time video transmissions. The company’s InfiLINK 2x2 family of solutions was selected for all backhauling needs.

“Once the project has been fully deployed, the council will benefit from a massively improved video-surveillance platform which meets their financial and operational requirements, without compromising on quality or reliability, and one which is able to deliver 24/7 monitoring of the entire town centre, ultimately improving the overall safety aspects in the town,” said Kamal Mokrani, Global Vice President at Infinet Wireless. “It is vital for councils of all sizes to keep pace with rapid advances in video-surveillance technologies. At Infinet Wireless, we are able to provide councils and law enforcement agencies with state-of-the-art wireless infrastructures to help them deliver on all their objectives, both for today’s needs and well into the future.”

The contract for delivery of the upgraded platform was awarded to the system integrator Videcom Security. Videcom worked closely with Purdicom, Infinet Wireless’ strategic partner in the UK, to design and implement the best-fit solution to meet the Council’s exact requirements. Adrian Wheeler, Account Manager at Purdicom, said: “We realised with this project that the customer wanted great quality, yet the performance requirements had to exceed all expectations too. Implementing Infinet’s solutions made these aspects a reality.”
ProLabs to advance 400G network infrastructure with launch of high-density optical transceiver at OFC 2020
ProLabs, a global leader in optical networking and connectivity solutions, today expands next-generation 400G network capabilities with the launch of its new transceiver solutions to address rising network capacity demands. Increasing 5G traffic is placing pressure on network operators to upgrade their current infrastructure. To address these challenges and meet capacity demands, both now and in the future, ProLabs' latest transceiver - the QSFP28-DD 2x 100G - enables operators to increase port density, solve interoperability issues between current and future infrastructure and minimize infrastructure investments.

“For network operators to excel in a competitive market, it is imperative to deliver high-quality, high-capacity network connectivity in line with growing customer expectations,” said Patrick Beard, Chief Technology Officer at ProLabs. “Doing so requires investment in next-generation 400G infrastructure whilst also keeping costs to a minimum to protect the bottom line. Our latest transceiver solution allows networks to increase capacity while reducing upgrade costs to provide flexibility for the future.”

As data centers and network operators move to 400G to offer higher data rates, significant interoperability issues have arisen with current network infrastructures - forcing entire systems to be replaced at a huge cost. The new ProLabs QSFP28-DD 2x 100G transceiver utilizes two Non-Return-to-Zero (NRZ) connections and is compatible with many existing transceivers, offering large-scale operators the ability to invest while minimizing cost. Beard added: “We are delighted to be launching the QSFP28-DD 2x 100G at OFC 2020. The solution will provide more operators with a transceiver-based option to combat the challenges of interoperability. 
It will make a significant difference by avoiding the replacement of entire systems and the costs that come with it.” ProLabs QSFP28-DD 2x 100G transceivers utilize the new high-density CS® connector to hand off two 100G NRZ connections to the network and are interoperable with existing 100G-CWDM4, 100G-LR4, and 100G-4WDM10 transceivers. The 2x 100G transceivers offer large-scale operators the ability to relieve network bottlenecks, reduce overhead expenses and retain flexibility for the future.
NGMN Alliance and ESOA Members Collaborate to Extend Rural Connectivity With Non-terrestrial Networks
The Next Generation Mobile Networks (NGMN) Alliance, in collaboration with members of the EMEA Satellite Operators Association (ESOA), has progressed the development of Non-Terrestrial Networks (NTNs) as 3GPP enhances 5G to support non-terrestrial access in its Release 17 work programme. Providing a convincing case for the implementation of NTN technology, the NGMN Alliance worked with key ESOA members to successfully demonstrate to 3GPP that space-based networks provide an effective alternative for network connection beyond traditional deployment methods, especially in rural areas.

“It was a great achievement to make an impact on 3GPP’s decision to include NTN in the Release 17 work programme through the NGMN Alliance NTN Position Paper, which demonstrates technological integration between terrestrial and non-terrestrial networks to significantly progress the extension of network coverage,” said Sebastien Jeux (Orange), lead of the NGMN project “Extreme Long Range Communication for Deep Rural Coverage”. The paper highlights the requirement for mobile network operators (MNOs) to integrate space-based systems into their networks.

“We are proud to demonstrate the potential of the integration of terrestrial networks and NTNs to provide internet and mobile broadband services to users in harder-to-reach areas such as coastlines, forests, deserts and mountains,” said Dr. Peter Meissner, CEO of the NGMN Alliance. “By 2025, we envision the full deployment of NTNs to meet the challenges of mobile network operators and vertical industries in terms of reachability, availability and resilience, which will make a significant difference to the extension of 5G connectivity.” Integrating space-based systems with existing terrestrial networks enables mobile network operators to overcome the challenge of signal quality and roaming capabilities in underserved areas. 
In the 5G world, terrestrial and non-terrestrial networks will further complement each other, including through integrated 5G direct satellite access to conventional smartphones, in order to deliver superior coverage to users. Several use cases were also identified for new satellite-based services in the absence of conventional cellular coverage. These go beyond rural broadband and vehicular connectivity to include geostationary orbit satellite (GEO) fixed Internet of Things (IoT) direct connectivity, which will aid the farming, sensing, asset tracking and oil and gas vertical markets. Ultimately, the development will facilitate the movement towards advanced public safety and smart cities. The paper further assesses reliability and efficiency with regard to the feasibility of service transmission between user equipment (Class 3 UE, Very Small Aperture Terminal (VSAT) UE and IoT devices) and NTN platforms such as satellites - both GEO and non-geostationary (NGSO) - and high-altitude platforms (HAPs), concluding that NTNs can provide direct mobile broadband access. NGMN will continue deeper NTN analysis based on the requirements of mobile network operators expressed in this White Paper, jointly with all industry stakeholders.
Neural Technologies: “Rating reconciliation is critical to maximize 5G services and avoid revenue leakage”
To manage the growing number of 5G end-user services, Communications Service Providers (CSPs) must consider ‘real-time’ digital rating and charging processes to fully capitalize on new 5G revenue streams and avoid revenue leakage, said Paul Cox, Business Development Manager at Neural Technologies, at the RAG Delhi Conference, 5-6 February 2020. With the emergence of 5G, customers are demanding increasing levels of personalized end-user services, resulting in many new revenue streams for CSPs. Speaking at the conference, Cox explained how to manage increasing volumes of complex end-user services for effective management of charging and billing processes.

“To truly capitalize on the many opportunities of 5G use cases, CSPs must adopt the latest digital transformation technologies to create a ‘real-time’ experience of service delivery and payment, fostering long-term customer loyalty,” said Cox. “To achieve this, CSPs need to create a near real-time convergent rating and charging strategy which can match service demand using the latest Artificial Intelligence (AI) and Machine Learning (ML) technology.”

Detecting revenue leakage is key to maximizing the potential of new service revenue streams. With high volumes of customer transactions, Neural Technologies’ Rating Reconciliation solution supports the management of tens of thousands of data records per second, which is critical for processing and charging customers accurately for the services they opt into. The rating solution supports legacy and next-generation environments, any type of device, and any type of service and payment method, fully supporting the management of large volumes of data records. The Optimus Charging solution, based on the Optimus Platform, provides a fully configurable solution covering all processes involved in charging, including configurable input formats, convergent mediation and complex tariff plans, to charge customers accurately for their services. This is critical to support growing end-user usage and to execute efficient, accurate charges that foster customer loyalty.

Paul Cox addressed the challenges of revenue management with Optimus Revenue Assurance Rating Reconciliation during his presentation at the RAG Delhi Conference 2020 on 5 February.
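The reconciliation idea itself is straightforward to sketch: compare every usage event against the rated output and flag events that were never charged (leakage) or were charged the wrong amount. The record schema and field names below are illustrative assumptions, not Neural Technologies' actual data model:

```python
def reconcile(usage_records, rated_records):
    """Toy rating reconciliation: flag usage events that were never rated
    (revenue leakage) and rated events whose charge does not match the
    tariff. Record ids and fields are illustrative only."""
    rated = {r["id"]: r for r in rated_records}
    leaks, mismatches = [], []
    for u in usage_records:
        r = rated.get(u["id"])
        if r is None:
            leaks.append(u["id"])                        # event never charged
        elif abs(r["charge"] - u["units"] * u["tariff"]) > 0.005:
            mismatches.append(u["id"])                   # charged the wrong amount
    return leaks, mismatches

usage = [
    {"id": "cdr-1", "units": 10, "tariff": 0.05},
    {"id": "cdr-2", "units": 4,  "tariff": 0.05},
    {"id": "cdr-3", "units": 7,  "tariff": 0.02},
]
rated = [
    {"id": "cdr-1", "charge": 0.50},
    {"id": "cdr-3", "charge": 0.10},   # should have been 0.14
]
print(reconcile(usage, rated))  # (['cdr-2'], ['cdr-3'])
```

A production system streams this comparison over tens of thousands of records per second rather than lists in memory, but the flagging logic is the same in principle.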
NetFoundry’s Cloud Gateway Now Available at Digital Ocean Marketplace
CHARLOTTE, NC – September 5, 2019 – Solution providers can now spin up programmable, zero trust networking across the Internet via NetFoundry’s 1-click application on the DigitalOcean Marketplace. DigitalOcean’s Marketplace is a platform that connects developers with easy-to-use, partner-built solutions to simplify and accelerate app development, deployment, and scaling. Independent Software Vendors (ISVs), SaaS providers, Solution Integrators (SIs) and Managed Service Providers (MSPs) who host their apps in DigitalOcean use NetFoundry to enable private-network, zero trust connections between their apps and their customers, without requiring their customers to configure and manage VPNs, which are both expensive to operate and often damage the performance of the application. Rather than nailing up 20 VPNs to 20 customers, the ISV uses NetFoundry’s APIs, SDKs or web console to centrally and programmatically manage access across all tenants in a least-privileged access paradigm.

“DigitalOcean and NetFoundry share a goal to enable developer innovation, and the combination of our services enables developers, ISVs, SIs and MSPs to enjoy a new art of the possible in which they leverage private application connectivity without requiring their end customers to nail up VPNs, private circuits and custom hardware,” said Galeal Zino, CEO of NetFoundry.

“As developers and small- and mid-sized businesses turn to modern apps to power their latest projects, we want to help make app creation easier from start to finish. By building upon DigitalOcean’s Developer Cloud to simplify infrastructure, NetFoundry lets developers instantly connect distributed applications securely in any cloud or device in just one click," said Nick Wade, Head of DigitalOcean Ecosystem & Marketplace.

NetFoundry’s global Fabric is accessible from any Internet connection via SDK or software endpoints, and functions as a zero trust Internet overlay with optimized performance. 
NetFoundry manages the Fabric as a service (Network-as-a-Service), while developers control the Fabric via API, SDK or web console, often simply using NetFoundry’s APIs in their DevOps and cloud orchestration tools such as Jenkins and Ansible (Connectivity-as-Code). “NetFoundry offers a better way for developers to securely scale and deliver apps,” said Greg Shields, Director of Strategic Partner Alliances for NetFoundry. “While legacy networking is fine for legacy use cases, it was not built to effectively deploy applications across hybrid topologies. Trying to use legacy WAN for distributed apps blocks innovation and business benefits due to a variety of issues: long implementation timelines, difficulty in scaling, proprietary hardware, complex and error-prone architectures, compromised performance, increased attack surfaces, incompatibility with DevOps, and potentially higher-than-expected costs.”
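The least-privileged access paradigm described above can be sketched in a few lines: each tenant identity is granted access to exactly one named service, rather than a VPN that exposes a whole network segment. The policy shape below is hypothetical and does not use NetFoundry's real API; it only illustrates the per-tenant, deny-by-default model:

```python
def least_privilege_policies(tenants, app):
    """Hypothetical policy generator (not NetFoundry's actual API):
    each tenant identity may reach exactly one named service instance."""
    return [
        {"identity": t, "service": f"{app}.{t}", "action": "dial"}
        for t in tenants
    ]

def can_reach(policies, identity, service):
    """Deny by default: a tenant reaches a service only if an
    explicit policy allows it."""
    return any(p["identity"] == identity and p["service"] == service
               for p in policies)

policies = least_privilege_policies(["acme", "globex"], "billing-api")
print(can_reach(policies, "acme", "billing-api.acme"))    # True
print(can_reach(policies, "acme", "billing-api.globex"))  # False: no lateral access
```

Adding a twenty-first customer is one more policy entry generated centrally, rather than another VPN tunnel to provision and maintain.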
InfiNet Wireless to showcase high performing mining solutions at Mining Week Kazakhstan
InfiNet Wireless, the global leader in fixed wireless broadband connectivity, will showcase its ground-breaking wireless solutions for the mining industry at Mining Week Kazakhstan 2019. InfiNet Wireless will present its Mobile Video Complex and Vector 5 solutions at the event, also known as the 15th International Mining Exhibition for Mining and Exploration, Mineral and Coal Processing and Metallurgical Technologies. Taking place in the city of Karaganda, Kazakhstan from 25-27 June, the event will unite mining experts with a focus on the latest solutions for advancing connectivity within mines.

“We are looking forward to sharing our cutting-edge solutions with the mining industry at this leading event. The Mobile Video Complex solution offers mining companies priceless benefits, including optimization of maintenance costs and operation in harsh climatic conditions and aggressive environments,” said Roman Smirnov, Chief Commercial Officer at InfiNet Wireless. “The scalable and flexible system, which can be easily deployed without high costs, has a communication channel range of up to 100km and a communication distance with fixed objects of up to 6km at a vehicle speed of 90km/h.”

InfiNet Wireless’ Mobile Video Complex is a solution specifically crafted for the mining industry which organizes the main communication channels for mining companies. It also consolidates all geographically distributed objects of the enterprise into a single information network and operates a CCTV system. Furthermore, the Mobile Video Complex system allows seamless communication with mobile objects and machinery, including cars, excavators, drilling machines, railway transport and loading and unloading equipment. The Mobile Video Complex is compatible with InfiNet Wireless’ InfiLINK 2x2 PRO, InfiLINK 2x2 and InfiLINK XG wireless point-to-point solutions and its InfiMAN 2x2 point-to-multipoint solution. 
The solution has already been successfully deployed by one of the largest thermal coal producers in Kazakhstan, one of the top five mining enterprises in Russia and one of the largest gold mining companies worldwide. InfiNet Wireless will also display its Vector 5 point-to-point solution, which operates at 5 GHz and boasts a capacity of up to 450 Mbps in only a 40 MHz channel, full Quality of Service (QoS) and processing power in excess of 800,000 packets per second. Vector 5 is specifically designed to offer the highest spectral efficiency available in the current wireless market and is capable of operating in temperatures ranging from sub-zero to tropical conditions. Vector 5 offers a wide range of uses at a high performance level, such as Internet access, multiservice networks, telemetry and high-resolution video transmission in a CCTV infrastructure, making efficient use of the spectrum currently available in the market. InfiNet Wireless’ booth will be located at stand number 339 at the Multilogic Sport Complex Zhastar.
Launch of the digital service EVE: automated live captions with artificial intelligence
Filmgsindl invented EVE after experiencing the difficulties of live captioning during customer events. The objective was not only to reduce the outsized costs of travel and external expenses for interpreters, stenographers and hardware, but to find a better digital solution, as the quality of human-produced live captions often shows limitations. Thus, EVE not only helps organizations and companies like Microsoft meet accessibility standards and lower costs; it is also an additional medium. The digital service captures every spoken word and shares a transcript (as a PDF) directly after the speech for further use, such as articles, event film subtitles and SEO. That content makes events, speeches and video libraries completely searchable and can improve reputation and image through the digital footprint. Nowadays everybody posts every thought on Twitter and shares pictures on Instagram; now it is time to digitalize the spoken word.

To guarantee the quality of the text output, it is possible to use one or more online correctors. These editors can improve the quality even further, as the text can be corrected live, from anywhere. EVE learns constantly through machine learning: the basic language model is optimized continuously, and its results improve accordingly. EVE also memorizes the corrections, and individual dictionaries can be uploaded to teach EVE vernacular.

Thomas Papadhimas: “It is 2019, and thus long overdue to offer a digital service which automatically generates live captions of videos, events, lectures, etc. The service is easy to use with common platforms and devices, independent of OS, and cost-efficient. Globally, many people rely on captions, but subtitles are rare. EVE will change that and make the world a better place, as inclusion is not negotiable.”

So far EVE works in English and German, but live machine translation into other languages is already available in a beta version. 
A planned feature on the roadmap will further improve recognition based on what EVE learns from the human correctors. More details will be shared soon.
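EVE's implementation has not been published; as a minimal, purely illustrative sketch of the "individual dictionary" idea described above, a captioning pipeline could post-process raw recognizer output with a table of known misrecognitions. All names and phrases below are hypothetical stand-ins.

```python
import re

# Hypothetical custom vocabulary ("individual dictionary"): phrases a
# speech recognizer tends to mis-hear, mapped to their correct spellings.
CUSTOM_DICTIONARY = {
    "filum gsindl": "Filmgsindl",
    "eave": "EVE",
    "s e o": "SEO",
}

def apply_dictionary(transcript: str, dictionary: dict) -> str:
    """Replace known misrecognitions with their corrected spellings."""
    corrected = transcript
    for wrong, right in dictionary.items():
        # Whole-word, case-insensitive replacement.
        corrected = re.sub(rf"\b{re.escape(wrong)}\b", right, corrected,
                           flags=re.IGNORECASE)
    return corrected

raw = "eave was created by filum gsindl to improve s e o for events"
print(apply_dictionary(raw, CUSTOM_DICTIONARY))
# "EVE was created by Filmgsindl to improve SEO for events"
```

A real service would apply such corrections incrementally to a live caption stream, alongside the human correctors mentioned above.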
KISO Released the Results of Its Survey on User Awareness of Naver’s Search Word Service
On February 19, the Naver Search Word Verification Committee of the Korea Internet Self-governance Organization (KISO) held a press conference to release the results of its survey on user awareness of Naver’s search word service at the Conference House Dalgaebi in Jung-gu, Seoul. The committee, which has verified the appropriateness and validity of Naver’s search word service since 2012, has tried to apply consistent standards and principles in its verification process, but there have been cases in which value judgments clashed sharply within the committee, or in which the committee ran up against the limits of drawing realistic standards. A user-awareness survey therefore became necessary to gauge public sentiment, apart from expert opinion, and to identify the public’s common-sense criteria. The survey was conducted by the research company Macromill Embrain on 2,000 men and women aged 19 to 60. The questions covered a) usage and evaluation of the search word service and b) opinions on search word service policy. ▲ The press conference on the user awareness survey of Naver’s search word service. First, regarding usage, users use the search word service about two to five times per day on average, with no difference by age, sex, region or political tendency; however, heavy users who use it more than ten times a day were mostly in their 20s and 30s. In the case of real-time search, users look at the ranking of search words to check recent topics or issues and then click on one to three search words (45.7%) to obtain relevant information. On the other hand, overall trust in the service was not as high as satisfaction with it. By age group, satisfaction and trust were highest among respondents in their 50s, while trust was low among those in their 20s. 
In the evaluation of each search word service, the order 'Autocomplete Search > Related Search > Real-time Search' held across all evaluation items (degree of use, usefulness, convenience, necessity, intention to use). Next, the section on search word service policy was divided into 1) judgments about the deletion of search words, 2) the portal provider’s intervention in the search word service, and 3) opinions on the portal provider’s search word service policy. 1) Judgments about the deletion of search words. For search words exposed mainly as advertisements, opinions differed somewhat depending on whether the event was of a public-service or commercial nature, but more than half answered 'Leave it as it is.' To protect children and adolescents, 87.1% of respondents answered that adult video titles should be deleted, while 13% said they should be left. For copyright protection, when the title or an episode of popular content appears in the real-time search, 58% answered that it should be deleted because the search results surface illegal download methods and the like. For incidents of national concern such as suicides and sexual violence, a high percentage answered 'Leave it as it is' on whether local names and school names should be exposed as search words; the response rate was 5 percentage points higher for exposing school names. Moreover, about 80% of respondents said that the names of hospitals where medical accidents occurred, or of companies or products under investigation, should be exposed as search words. Many held that search words combining celebrities' names with specific body parts should be deleted because of defamation and insult, whereas a high percentage thought search words about celebrities' romance rumors, involvement in accidents and crime-related cases should be left as they were. 
An interesting point concerning celebrity search words is that 63% thought search words about celebrity couples who separated long ago should be deleted. In this regard, the verification committee said, “The public’s judgment on deleting search words is more reasonable than the experts expected.” For real-time searches that became controversial after anonymous media reports, 'Leave it as it is' drew a high percentage for troubled religious figures, restaurants and celebrities; for the names of celebrities who became an issue on entertainment shows, however, the gap between ‘It should be deleted’ (42.5%) and ‘Leave it as it is’ (57.5%) was not large. For antisocial and illegal search words in which a specific person’s name is combined with words like ‘garbage,’ ‘suicide’ or ‘communist,’ the share answering ‘It should be deleted’ was remarkably high. Search words related to the Sewol Ferry were judged differently case by case: for disgusting expressions about the Sewol Ferry, 78.8% of respondents answered that they should be deleted, but for the search word ‘7-hour theory of the Sewol Ferry’, 61.4% answered ‘Leave it as it is,’ and for ‘human sacrifice theory of the Sewol Ferry’, 62.8% answered ‘It should be deleted.’ 2) The portal provider’s intervention in the search word service. On whether portal providers should intervene in defamatory search words, the results differed by occupation. Only for ordinary citizens did a majority (57.6%) answer ‘need intervention’; for the other categories the ‘need intervention’ responses were: independent creators (39.6%) > celebrities (36.9%) > businesspeople (27.6%) > politicians (26.6%) > high-ranking officials (26.5%). 
Meanwhile, in the section on intervention to delete real-time searches for the protection of users, intervention for 'youth protection' (80.8%) showed the highest percentage. On the portal provider’s intervention in deleting search words to protect privacy and personal information, ‘need intervention’ drew a low percentage for high-ranking officials (34.1%), politicians (35.1%) and businesspeople (37.2%), while for celebrities (49.0%) and independent creators (52.2%) the shares for ‘need intervention’ and ‘non-intervention’ were similar. For ordinary citizens’ privacy, by contrast, support for intervention was high at 69.8%. 3) Opinions on the portal provider’s search word service policy. In a survey of opinions on the portal provider's search word service policy on a five-point scale, “the portal provider has a responsibility to manage the search word service” received the highest score at 3.99, and “the portal provider manages it fairly” received the lowest at 2.66. As in previous trust surveys, users showed low trust in portal providers’ management of search words. The verification committee commented, “Because giving users the impression of deliberate interference is what portal providers fear most, they always reassure users by saying ‘there is no manipulation,’ meaning there is no malicious intent in how they manage the service. But because users take this to mean the portals do not touch search words at all, trust has fallen due to this gap in understanding between users and portals. 
If portal providers transparently present their criteria for managing search words, it will help improve users’ trust.” As the most appropriate method for the fair management of the search word service, 'management according to the portal providers’ own principles of self-responsibility' (37.7%) drew the highest percentage. On discontinuing the portal providers’ search word service, responses were mostly negative: 63.7% said it should be continued and 7.5% said it should be discontinued. Moreover, 79.0% of respondents answered that a portal provider should disclose its management principles for the search word service and its standards for deleting search words; 10.4% answered ‘I don’t know’ and 10.4% ‘no need to disclose.’ Lastly, when asked about the need for external institutional verification of Naver's search word service, 87.1% answered that 'external verification is necessary.' After releasing the survey results, the Naver Search Word Verification Committee presented policy proposals: “First, the main goal of search word service policy should be the 'protection of users and promotion of their interests.' Second, the process of gathering opinions from various interest groups on search word service policy should continue. Third, the principle of 'non-intervention' in search word service policy needs to be reconsidered. Fourth, it is urgent to improve trust in the search word service.”
NCsoft AI Media Talk
NCsoft held an AI media talk at its R&D center in Seongnam, Gyeonggi-do, on March 15, 2018. The event was organized to introduce the current status and vision of NCsoft's AI research and development. NCsoft officials in attendance included Un-hee Han, Chief of the Media Intelligence TF; Jae-jun Lee, Director of the AI Center; and Jung-seon Jang, Chief of the NLP Center. “NCsoft started AI research in 2011 and organized today's event to introduce what it has done so far. I would like to present the AI research that NCsoft has been preparing for a long time,” said Un-hee Han, Chief of NCsoft’s Media Intelligence TF. ▲ NCsoft's AI media talk. According to the presentation, NCsoft's AI research is centered on two organizations: the AI Center (Artificial Intelligence Center) and the NLP Center (Natural Language Processing Center). The two centers, with more than 100 researchers in total, operate five labs under the direct control of CEO Taek-jin Kim. The AI Center comprises the Game AI Lab, the Speech Lab and the Vision TF, while the NLP Center covers its technology areas through the Language AI Lab and the Knowledge AI Lab. The Game AI Lab researches AI technologies for game development and services, such as game-playing AI, AI for game planning and AI for game art development, based on reinforcement learning, deep learning and simulation technology. By applying AI to the ‘Infinite Tower’ content of ‘Blade & Soul’, it created an environment where users can duel with an AI. Recently, through deep reinforcement learning, which combines existing reinforcement learning with deep learning, the performance of the AI has been improved, and a combat AI that feels similar to a human opponent is being developed using users' combat logs. 
The Speech Lab researches voice, speaker and emotion recognition technology, which recognizes the language, speaker and emotional information contained in a voice signal, and voice synthesis technology, which converts text into human-sounding voice for natural dialogue and emotional speech. The lab is also studying how to use these technologies in game development and play. The Vision TF researches images and video, such as AI that recognizes images or video, or that creates images using generative adversarial network (GAN) technology. Examples include an AI that automatically assigns tag information to graphic resources, performs sketch auto-coloring, and automatically generates needed images. ▲ The Game AI Lab applied AI to the ‘Infinite Tower’ content of 'Blade & Soul'. The Language AI Lab researches natural language processing technology as well as various applications for exchanging information in human language, such as Q&A, dialogue, document summarization and story generation. Beyond simply having an AI answer questions, the lab is trying to make AI grasp the significant parts of a text and summarize it. The Knowledge AI Lab researches technologies that infer, generate and deliver new knowledge from meaningful knowledge extracted and stored from various data such as text. Meanwhile, NCsoft plans to expand and strengthen its investment in AI R&D and in fostering AI researchers, and to this end is actively recruiting talent. The AI Center and the NLP Center work closely with 12 research laboratories in the domestic AI field, including at Seoul National University and KAIST. 
Recently, Hae-chang Lim, former professor in the Department of Computer Science at Korea University and Korea's leading authority in the field of natural language processing, joined the NLP Center as an advisory professor. The company will also continue to share the status of its research both inside and outside the firm, including with the academic community. It held ‘NCSOFT AI DAY 2018’ on February 22 and 23, sharing the current status of its R&D with about 200 NCsoft employees and 100 outside guests, including professors from domestic graduate schools in industrial cooperation and master's and doctoral students.
Google AI Forum 10th Round: AI Innovation and Computational Photography
On February 28, 2018, Google hosted the ‘Google AI Forum 10th Round: AI Innovation and Computational Photography’ at its office in Gangnam-gu, Seoul. At the conference, Marc Levoy, a Distinguished Engineer at Google, gave a video lecture on how AI technology is integrated into photography, which records our everyday lives and memories. ▲ The ‘Google AI Forum 10th Round: AI Innovation and Computational Photography’. According to the presentation, Google introduced a ‘portrait mode’ combining machine learning and computational photography with its new Pixel smartphone. Portrait mode automatically applies a soft out-of-focus effect to the background so that the person is highlighted; this draws the eye to the subject rather than a cluttered background and lets the photographer take more artistic pictures. Portrait mode improves photographs through four steps, using AI to make each step efficient and deliver better results to users. The first step is to create an HDR+ image at capture time. HDR+ is Google's computational photography technique for improving the quality of captured pictures. To avoid losing highlights, HDR+ captures several under-exposed frames, then aligns, averages and merges them to reduce noise in the shadows. It then amplifies those shadows, a way of reducing global contrast while preserving local contrast, to obtain pictures with high dynamic range, low noise and sharp details even in dim lighting. The idea of aligning frames to reduce noise has been known for decades, but Google noted that its implementation is unusual in that it works on bursts from a handheld camera. ▲ HDR+ is Google's computational photography technique for improving the quality of captured photos. The second stage is machine learning-based foreground-background segmentation. 
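The align-and-average idea behind the HDR+ step can be sketched with NumPy. This is a toy illustration under simplifying assumptions, not Google's implementation: alignment here is a single global integer shift estimated by FFT cross-correlation, whereas HDR+ aligns per tile with subpixel accuracy.

```python
import numpy as np

def align_and_merge(frames):
    """Toy sketch of burst merging: align each frame to the first by a
    global integer shift, then average the stack. Averaging N frames
    cuts noise by roughly sqrt(N)."""
    ref = frames[0].astype(np.float64)
    merged = ref.copy()
    for frame in frames[1:]:
        f = frame.astype(np.float64)
        # Estimate a global (dy, dx) shift via FFT cross-correlation.
        corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(f))).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        merged += np.roll(f, (dy, dx), axis=(0, 1))
    return merged / len(frames)

# Synthetic example: the same noisy scene, one copy shifted by (2, 3) pixels.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 1, (32, 32))
noisy_a = scene + rng.normal(0, 0.1, scene.shape)
noisy_b = np.roll(scene, (-2, -3), axis=(0, 1)) + rng.normal(0, 0.1, scene.shape)
merged = align_and_merge([noisy_a, noisy_b])
# The merged frame is closer to the true scene than either noisy input.
print(np.abs(merged - scene).mean() < np.abs(noisy_a - scene).mean())
```

The same principle scales to the real case: more frames, tile-based alignment, and robust (rather than plain) averaging.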
In this second stage, the system decides which pixels belong to the foreground, typically a person, and which belong to the background. This is tricky because the background cannot be assumed to be a particular color such as green or blue, unlike chroma keying (a.k.a. green screening) in the movie industry. Instead, Google applied machine learning: it trained a Convolutional Neural Network (CNN), written in TensorFlow, to estimate which pixels are person and which are not. 'Convolution' means that the learned components of the network are organized as filters (weighted sums of the neighboring pixels around each pixel), so you can think of the network as simply filtering the image, then filtering the filtered image, and so on. 'Skip connections' allow information to flow easily from early stages of the network, which reason about color and edges, to later stages, which reason about high-level features such as faces and body parts. Combining stages like this matters when you need not merely to determine whether a photo contains a person, but to identify exactly which pixels belong to that person. The CNN was trained on almost a million pictures of people with hats, sunglasses and ice cream cones. The third stage is the calculation of depth using a stereo algorithm. The Pixel 2 does not have dual cameras, but it does have Phase-Detect Auto-Focus (PDAF) pixels, sometimes called dual-pixel autofocus (DPAF): every pixel on the image sensor chip is split into two smaller side-by-side pixels that can be read from the chip separately. While many cameras, including DSLRs, use PDAF only to focus faster when recording video, the Pixel 2 also uses it to compute depth maps. 
PDAF pixels give views through the left and right sides of the lens in a single snapshot, and these left-side and right-side images (or top and bottom) serve as input to a stereo algorithm like the one used in Google’s Jump panorama stitcher. The algorithm first performs subpixel-accurate, tile-based alignment to produce a low-resolution depth map, then interpolates it to high resolution using a bilateral solver. ▲ Depth is calculated using a stereo algorithm. Finally, the fourth stage puts it all together to render the final image: the segmentation mask from the second step is combined with the depth map from the third step to decide how much to blur each pixel of the HDR+ picture from the first step. The rough idea is that pixels considered to be the person stay sharp, while pixels considered background are blurred in proportion to their distance from the in-focus plane, with those distances taken from the depth map. The blur is applied by replacing each pixel with a translucent disk of the appropriate size; compositing all these disks in depth order yields a good approximation of real optical blur. Marc Levoy also presented tips for shooting a nice portrait: stand close enough that the subject's head fills the frame; for group shots, place the subjects at the same distance from the camera; leave some distance between the subject and the background for the blur effect; and take off dark sunglasses, wide-brimmed hats and big scarves. For close-ups, the focus should be adjusted so that the subject of interest remains sharp. After the lecture, Marc Levoy said, “It is true that mobile phones cannot yet completely replace professional cameras due to technical and mechanical limitations, but it is possible to show users a certain level of photos. 
This is important for widening users' choices, and machine learning and computational photography are at the center of it.”
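The final compositing step, combining the segmentation mask and the depth map to blur the HDR+ image, can be illustrated with a toy sketch. This is not Google's renderer: for simplicity it applies a per-pixel box blur whose radius grows with defocus, rather than compositing translucent disks in depth order.

```python
import numpy as np

def depth_dependent_blur(image, mask, depth, focus_depth, max_radius=4):
    """Toy sketch of portrait-mode compositing: pixels inside the person
    mask stay sharp; background pixels get a box blur whose radius grows
    with distance from the in-focus plane."""
    h, w = image.shape
    out = np.empty_like(image, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            if mask[y, x]:                  # person: keep sharp
                out[y, x] = image[y, x]
                continue
            # Blur radius proportional to defocus |depth - focus_depth|.
            r = int(round(max_radius * abs(depth[y, x] - focus_depth)))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out

# Synthetic example: a striped scene, a "person" mask in the center, and
# a background one depth unit away from the in-focus plane.
img = np.zeros((16, 16)); img[:, ::2] = 1.0
mask = np.zeros((16, 16), bool); mask[4:12, 4:12] = True
depth = np.ones((16, 16)); depth[mask] = 0.0
result = depth_dependent_blur(img, mask, depth, focus_depth=0.0)
print(result[8, 8] == img[8, 8])   # masked person pixel is untouched
```

Disk compositing in depth order, as described above, additionally handles the way a blurred background bleeds over foreground edges, which this per-pixel gather cannot.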
Shutterstock Held Press Conference for Strategy Announcement
Shutterstock held a press conference at InterContinental Seoul COEX in Gangnam-gu, Seoul, on the morning of December 8, 2017. The event was organized by Shutterstock, which was participating in the Seoul Design Festival, to introduce its strategy, and was attended by Yvonne Januschka, its Asia Pacific Sales Director. Januschka said, “Shutterstock has been adding digital media such as high-quality photos and illustrations since its establishment in 2003 and is strengthening its position as a creative platform based on the latest technology. Korea is a very important market for Shutterstock, and we are working hard to build better business relationships. Thank you for your interest in Shutterstock.” ▲ Shutterstock held a press conference to announce its strategy. ▲ Yvonne Januschka said, "Since Korea is a very important market, we are working hard for a good relationship." Founded in 2003 by Jon Oringer in New York, Shutterstock provides high-quality licensed photos, vectors, illustrations, icons, videos and music to corporate, marketing agency and media professionals around the world. It also provides a separate business solution for the workflow needs of businesses and agencies through the Shutterstock Premier platform. Currently it offers more than 160 million images and 8 million videos from more than 300,000 contributors, with an average of 150,000 new images added every day. Approximately 1.7 million customers use Shutterstock in more than 150 countries, and downloads have reached 500 million so far, equivalent to 5.5 images downloaded per second. Korea is among the top five of Shutterstock's Asian markets, and about 1,000 Korean participants contribute to Shutterstock. Recently, Shutterstock's mobile app was updated to make it easier for domestic users to browse and download images whenever and wherever they need them. 
Shutterstock provides opportunities for its participants to develop, with space and activity support through which they can profit from their talents, and the participants in turn enrich Shutterstock's library. Korean illustrator Kim Yeon-hee was selected as a representative of Asia in 2017. ▲ Shutterstock is a creative platform based on digital media. ▲ Korean illustrator Kim Yeon-hee was selected as a representative of Asia in 2017. Shutterstock continues to evolve by introducing a range of innovative technologies alongside its images and videos. First, it introduced search based on its own convolutional neural network technology: in addition to keyword search, a 'reverse image search' function lets users find images with a similar look and feel by analyzing an image they supply. Next, it introduced a new watermark-generation function to protect participants' assets against the computer vision-based watermark-removal method that Google disclosed. Through API integrations with Adobe Photoshop and Microsoft PowerPoint, users can also use Shutterstock photos and illustrations directly within each application, allowing creative professionals to design faster and smarter. In addition, beginning in 2013, Shutterstock supported Facebook's basic ad-creation platform through a collaboration with Facebook, allowing advertisers to add professional images to the ads they post on Facebook at no additional cost. Through Shutterstock Premier, National Geographic has made it easier to find the right images for its video themes and improved its workflow, and BBDO, a global advertising agency, has partnered with Shutterstock to deliver high-quality content and raise the level of its advertising campaigns. 
Moreover, through an API integration in 2016, Google began using Shutterstock images in its advertising platform, making it easier for users to find images that match their digital advertising messages. Asked how Shutterstock will develop its activities in the Korean market, Yvonne Januschka said, “Shutterstock participated in the Seoul Design Festival and had many exchanges with Korean designers and marketers. We are delighted to introduce Shutterstock's innovative technology to support these activities, and we will continue to introduce a variety of innovative features for Korean companies, marketers, designers, entrepreneurs and participants.”
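Shutterstock has not published how its reverse image search works; conceptually, systems of this kind embed each image as a feature vector (for example from a CNN) and return the nearest neighbors of the query's vector. A minimal sketch with random stand-in embeddings:

```python
import numpy as np

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def reverse_image_search(query_vec, library, top_k=3):
    """Return the ids of the top_k library images most similar to the
    query, ranked by cosine similarity of their feature vectors."""
    scores = {img_id: cosine_similarity(query_vec, vec)
              for img_id, vec in library.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Random 128-d vectors as stand-ins for CNN embeddings of library images.
rng = np.random.default_rng(42)
library = {f"img_{i}": rng.normal(size=128) for i in range(100)}
# A query nearly identical to img_7's embedding should rank img_7 first.
query = library["img_7"] + rng.normal(scale=0.01, size=128)
print(reverse_image_search(query, library, top_k=3)[0])  # img_7
```

At Shutterstock's scale (hundreds of millions of images), a production system would replace the linear scan with an approximate nearest-neighbor index.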
Google AI Forum 8th Round: AI Innovation and Natural Language Processing
On December 5, 2017, Google hosted the 'Google AI Forum' on the theme of 'AI Innovation and Natural Language Processing' in the conference room of Google Korea in Gangnam-gu, Seoul. At the forum, Google introduced methods and examples of improving the user experience through natural language processing with machine learning. Google has long conducted research on natural language processing (NLP), focusing on algorithms that can be applied directly to a variety of languages and domains. These systems are used in many ways across Google products and services, helping to improve the user experience. Google works on the full range of traditional NLP tasks, with a strong interest in algorithms that run efficiently in highly scalable, distributed environments, including universal syntactic and semantic algorithms that support more specialized systems. Google's syntactic system predicts the morphological features of each word in a given sentence, such as part-of-speech tags, gender and number, and classifies words as subject, object, modifier and so on. Google also focuses on efficient algorithms that use large amounts of unlabeled data, and has recently introduced neural network technology. Google has lately concentrated on improving text analysis by incorporating knowledge and information from a variety of sources, and on applying frame semantics at the noun-phrase, sentence and document level. ▲ Hadar Shemtov, Director, Google Research. Hadar Shemtov pointed to mobile as the driving force behind the changing user environment, noting that more than half of today's queries are generated on mobile. As a result, he said, search increasingly requires an immediate "answer" rather than a "link," and interaction is noticeably shifting toward conversation. 
Google's core work now is to recognize spoken input, convert it to text, understand it, and output the result as voice. Voice queries tend to be longer and closer to natural language, and sequential queries, which take conversational form and refer back to elements of the previous question, were also introduced as an important feature of voice queries. As the voice-response technology for these queries evolves, answers need to be shorter and more fluent at the user's level. Accordingly, Google has focused on two NLP elements: a way to take a long sentence and reduce it to short sentences, and a way to obtain high-quality voice synthesis. To present a focused answer, a long, naturally phrased question must be reconstructed into a short and effective form. Google searches related documents for answers to the long question, then narrows down to the paragraphs and sentences within a document that relate to the answer, and simply outputs the relevant answer. Since an additional search is performed within the document, this can be seen as a "search within a search." The NLP system defines the grammatical relations and groupings between words in a sentence. What matters here is how to find, simply, the core of the sentence that contains the desired answer. Google groups words through this process and then, using statistical processing over many examples and cases, identifies the single node value most likely to fit the context. In addition, by building models with machine learning, it can produce answers that are grammatically correct while preserving the essence of the sentence. Moreover, in reducing a sentence, the system must decide whether to keep or discard each word. 
By classifying all the words in a sentence and modeling signature values over many example sentences, a sequence-to-sequence model using LSTMs can be applied, producing a simple sentence containing only the core by eliminating unnecessary parts. In this way, the NLP system can summarize sentences and derive simple, accurate values that contain only the core. ▲ WaveNet technology, with multiple layers between input and output, improves quality by combining multiple elements. In Google Assistant, the quality of voice output is critical, since the Assistant uses a voice-only interface. Existing speech synthesis techniques recorded syllables separately and then classified and recombined them as needed, which limited quality. WaveNet, a new probability-based speech synthesis technology introduced by Google, instead uses digitized speech samples to acquire the waveform information of speech, builds models, and learns from them; new text is then run through the model to produce high-quality results. WaveNet recognizes linguistic characteristics from the waveform information after vocalization and textization, and carries out speech synthesis through the constructed model: given new text, the model combines it with the learned linguistic characteristics to determine the new phonetic form and produce a new voice. The algorithm has several layers between its input and output data, and combines many factors to improve the quality of the result. He emphasized that although speech processing is a fairly expensive operation, the approach achieved a higher level of quality than traditional speech synthesis techniques. 
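The sample-by-sample, autoregressive principle behind WaveNet-style synthesis can be illustrated with a heavily simplified sketch. WaveNet itself is a deep network of dilated causal convolutions over quantized samples; here, as a stand-in, a fixed linear predictor generates a waveform one sample at a time from the samples before it.

```python
import numpy as np

def generate(seed, coeffs, n_samples):
    """Generate a waveform sample by sample: each new sample is a
    function of the most recent past samples (here, a fixed linear
    predictor standing in for a learned deep model)."""
    signal = list(seed)
    for _ in range(n_samples):
        context = signal[-len(coeffs):]      # only past samples are used
        signal.append(float(np.dot(coeffs, context)))
    return np.array(signal)

# Predictor that reproduces a sinusoid: x[t] = 2*cos(w)*x[t-1] - x[t-2].
w = 2 * np.pi / 16                            # 16 samples per period
coeffs = np.array([-1.0, 2 * np.cos(w)])      # applied to [x[t-2], x[t-1]]
seed = [np.sin(0.0), np.sin(w)]
wave = generate(seed, coeffs, 62)
# The generated continuation matches the true sinusoid.
print(np.allclose(wave, np.sin(w * np.arange(64))))
```

The real model replaces the fixed coefficients with a learned, nonlinear network and conditions each prediction on linguistic features of the input text, which is what makes the synthesis expensive.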
Moreover, by digitizing the waveform, the morphological feature of the analog domain, and charting the sound wave with a per-millisecond prediction method, it became possible to produce output similar to an actual voice. ▲ Choe Hyunjeong, Lead of Google's Computational Linguistics (NLU) Team. According to Choe Hyunjeong, Google is putting great effort into internationalization, having introduced the Assistant in about 15 countries, though the devices offered differ by country; in Korea, the Assistant is available on Android. To launch the Assistant quickly in many countries, scalability is important: building a solid system and taking full advantage of data-driven machine learning makes it easier to expand to more languages. In globalizing the Assistant, Google first implements the basic NLP system in English and then, after defining and designing the functions to be implemented, extends the whole language system to other languages. Most of the systems that make up the Assistant use machine learning, and recently deep learning with neural network models has also been applied to problems that are hard to solve with conventional rule-based machine learning, such as speech synthesis, speech recognition and conversation-model construction. For both machine learning and deep learning, data is key, and high-quality data collected for the purpose is essential. Since Google Assistant is a conversational model, there are additional considerations for the data: its character changes depending on whether the conversation is between human and human or human and machine, and the data shows different patterns by domain, such as the differences among spoken and written language, search queries, news and blog data. 
It was also mentioned that parallel data in multiple languages is necessary for extension to various languages. ▲ The 'Implicit Mention Detector' restores omitted parts to fit the context. Korean is one of the most difficult languages for data acquisition and modeling. In English, conversation between human and machine is not very different from conversation between humans, but Korean is different: subjects and predicates are frequently omitted, making the context hard to understand, and there are varied and complex honorific forms as well as subtleties of spacing and prosody. These points are therefore very difficult to understand and model by machine, and Google is addressing them with a knowledge-based model. Google introduced its machine learning-based 'Implicit Mention Detector' for the omitted sentence elements common in Korean conversation: it recognizes the omitted parts of a sentence and reconstructs a complete sentence. The system finds all predicates and restores the implicitly hidden pronouns; all subjects are restored, and all words referring to the same individual are grouped using a 'Co-Reference' model. Through this, many omitted subject and object words are restored and used for training. In addition, to understand the many human-language expressions that carry similar meanings, Google uses a 'Query Matcher': it applies deep learning to convert inputs into vectors, identifies similar meanings by computing distances between those vectors, and groups them together. Beyond this, for the handling of prosody, Google is developing a model that can understand and realize phrasing and prosody in the proper form.
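The 'Query Matcher' idea described above, mapping queries to vectors and grouping the ones that lie close together, can be sketched minimally. Google's actual model is not public; the hand-made 3-d vectors below are stand-ins for learned sentence embeddings.

```python
import numpy as np

# Stand-in embeddings: paraphrases get nearby vectors (hypothetical values).
EMBEDDINGS = {
    "what's the weather today": np.array([0.9, 0.1, 0.0]),
    "how is the weather":       np.array([0.8, 0.2, 0.1]),
    "set an alarm for 7am":     np.array([0.0, 0.9, 0.3]),
    "wake me up at seven":      np.array([0.1, 0.8, 0.4]),
}

def group_queries(embeddings, threshold=0.9):
    """Greedily group queries whose cosine similarity to a group's
    representative (its first member) exceeds the threshold."""
    groups = []
    for query, vec in embeddings.items():
        for group in groups:
            rep = embeddings[group[0]]
            cos = vec @ rep / (np.linalg.norm(vec) * np.linalg.norm(rep))
            if cos > threshold:
                group.append(query)
                break
        else:
            groups.append([query])       # no close group: start a new one
    return groups

print(len(group_queries(EMBEDDINGS)))  # 2: a weather group and an alarm group
```

With learned embeddings, the same distance-and-group step lets an assistant treat many surface forms of a request as one intent.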
Platform Meetup from Facebook Media Briefing Session
On the morning of November 3, 2017, Facebook held a media briefing for 'Platform Meetup from Facebook' at El Tower in Seocho-gu, Seoul. The global workshop is designed to show domestic developers and start-ups how to make the most of the Facebook platform, and at the briefing Facebook introduced the event's features and major case studies to the press, with Christine Chia, General Manager of APAC Platform Partnerships at Facebook, in attendance.

Christine Chia said, "The Facebook platform is the best tool to help developers and founders successfully reach their business goals and enter global markets. We promise generous support so that start-ups can grow not only in Korea but also in overseas markets."

▲ Christine Chia: "We will provide generous support for start-ups through the Facebook platform"

According to the announcement, Facebook continues to provide support and opportunities for partners such as developers, students, and companies to grow their businesses on the Facebook platform through the 'Facebook Platform Partnership'. In Asia in particular, including Korea, it is building tools for developers and start-ups to grow and is focusing on programs and products that support developer communities.

Two programs run under the 'Facebook Platform Partnership': 'FbStart' and 'Developer Circles'. 'FbStart' is a global Facebook program designed to help early-stage start-ups build and grow their businesses. It supports them in three ways: 'tools', which gives developers the tools and services they need for free; 'support', which offers direct, exclusive mentoring with Facebook technical support coordinators who have themselves run start-ups and succeeded as entrepreneurs; and 'community', which creates opportunities to connect with peers and colleagues through shared learning.
Currently, about 6,000 start-ups in more than 130 countries form a global community through the 'FbStart' program. In Korea, Facebook launched the 'FbStart Seoul' program in 2015 to support new mobile app start-ups with free development tools and mentoring throughout the app planning and production process. It runs a 'Bootstrap' track for start-ups just getting started and an 'Accelerate' track for companies pursuing growth after proving their initial business value. Facebook also provides outstanding companies with free services, including Facebook advertising and PAS advertising credit, product testing, recruitment, customer management, video conferencing, and document management.

Next, 'Developer Circles' is a networking and growth program for developer communities, providing a global network of regional communities that use discussion forums on Facebook developer tools and services and share knowledge. The program helps members discover a broader developer community and fosters engagement through knowledge sharing, community, and access to information. Facebook also aims to empower developers to build applications that can feed into programs such as 'FbStart'. Each 'Circle' is open to anyone interested in technology, such as students, entrepreneurs, and coding learners, and a member who takes on the role of 'Lead' is in charge of planning offline events and managing the online community. Facebook provides community organization and video materials for free so that developers can share knowledge and collaborate on a variety of technical topics through discussion forums.

▲ 'FbStart' is designed to help early-stage start-ups build and grow their businesses

Cases of domestic partners that achieved successful results on the Facebook platform were also shared.
'Wanted', a recruitment platform based on acquaintance recommendations, shortened its sign-up and sign-in process and vitalized the platform through Facebook's sign-in function. By securing more reliable profiles through the Facebook sign-in integration, more than 1,400 companies now use Wanted, and more than 100 new companies join each month.

'MangoPlate', a restaurant search and recommendation platform, kept its database of new restaurants up to date and automated the data-entry process by applying the Facebook Places API and Facebook sign-in to its service. After introducing the feature, MangoPlate acquired 30,000 new restaurants in just two weeks, adding 14 times as many new restaurants to its database as before.

'OP.GG', a game data analysis platform used by 27 million gamers worldwide, simplified account creation through its collaboration with Facebook and tracks gamers' usage patterns through the analysis tools, statistics, and insights that Facebook provides.

'Retrica', a globally popular camera app downloaded over 350 million times, introduced Facebook's authentication tool 'Account Kit' to attract overseas users. Besides cutting its monthly SMS verification costs, it increased the sign-up success rate through Facebook by 15% within three months of adoption.

Meanwhile, Facebook provides in-depth information on platform-based products such as native mobile apps through 'Platform Meetup from Facebook', along with tips on how companies can build meaningful customer relationships on the platforms Facebook provides. There was also a briefing on key updates from F8, Facebook's annual developer conference held in April.