Deep Learning’s Rise Leaves Japan Playing AI Catchup
In March 2016 AlphaGo became the algorithm heard around the globe. In a best-of-five series in Seoul, South Korea, the go-playing program developed by Google’s DeepMind soundly defeated Lee Sedol, a leading international player, 4–1, effectively dethroning humanity in one of the few remaining arenas where it ruled over artificial intelligence. While the ground-breaking achievement has shone a spotlight on the rapid advances in machine learning, AI is actually a rather mature concept with a long history encompassing a broad variety of technologies.
The desire to build machines with humanlike intelligence began with the accomplishments of British mathematician Alan Turing, who during World War II laid the groundwork for digital computing by developing an early electromechanical calculating device. His code-breaking machine and seminal 1948 report Intelligent Machinery inspired scientists to consider the potentials of AI. The term artificial intelligence was coined in 1956 at the pivotal Dartmouth Artificial Intelligence Conference. Japanese researchers were not among the diverse experts in attendance at that influential gathering, and it was not until 1986, three decades later, that the Japanese Society for Artificial Intelligence was formed to study AI.
Wired for Learning
Turing’s conception of machine intelligence encompassed such broad ideas as neural networks, something I call first-generation AI. However, the JSAI was created in response to a fresh wave of debate focusing specifically on signal and information processing. More than 90% of AI researchers—including those in Japan, where funding has steadily increased—are engaged in exploring these systems, which I term second-generation AI. This rapidly developing field has already produced such innovations as the World Wide Web and Google’s search engine.
One rapidly developing area of AI is machine learning. The idea of constructing algorithms able to analyze data and learn to make decisions and predictions based on results is not new. However, the field sat on the back burner of AI research for want of computers capable of making the necessary computations. Since the early 2000s, though, the emergence of increasingly powerful computers has enabled great strides in what is known as deep learning, making it possible to create large, multilayered artificial neural networks able to process vast amounts of data. I consider this to be third-generation AI.
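The multilayered networks described above can be sketched in miniature. The toy example below, written in plain NumPy, trains a small two-layer network by gradient descent to learn the XOR function; the layer sizes, learning rate, and step count are illustrative choices, not any particular system's settings.

```python
import numpy as np

# A minimal two-layer neural network trained on XOR with plain NumPy.
# This is only an illustrative sketch of a "multilayered" network, far
# removed from the large-scale systems the article discusses.

rng = np.random.default_rng(0)

# XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Layer sizes: 2 inputs -> 8 hidden units -> 1 output
W1 = rng.normal(0.0, 1.0, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1 + b1)    # hidden layer
    out = sigmoid(h @ W2 + b2)  # output probability
    return h, out

lr = 0.5
for step in range(5000):
    h, out = forward(X)
    # Gradient of binary cross-entropy w.r.t. the pre-sigmoid output
    grad_out = (out - y) / len(X)
    # Backpropagate through the two layers
    gW2 = h.T @ grad_out
    gb2 = grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (1 - h**2)  # tanh derivative
    gW1 = X.T @ grad_h
    gb1 = grad_h.sum(axis=0)
    W2 -= lr * gW2
    b2 -= lr * gb2
    W1 -= lr * gW1
    b1 -= lr * gb1

_, preds = forward(X)
print((preds > 0.5).astype(int).ravel())
```

After training, the network's thresholded outputs reproduce the XOR table; stacking more such layers, with vastly more data and compute, is the basic recipe behind deep learning.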
In Need of Experts
While computers have improved, the lack of activity in machine learning during the 1990s has resulted in a grave shortage of experts in the field. The scarcity is particularly acute with deep learning and has forced tech giants like Alphabet (Google), Facebook, Microsoft, and Baidu to actively court and handsomely compensate suitably capable researchers.
Japan has felt the pinch of this extreme scarcity of deep learning experts especially keenly. The situation is so dire, in fact, that there is hardly a domestic university or research organization with a dedicated researcher on staff.
Efforts are underway to address this paucity, and a rapidly growing number of institutes are moving to offer research programs in deep learning. The problem, however, is that many of the top professors influencing the direction of programs are themselves little better than novices in the field, having spent most of their careers studying second-generation AI. In addition, views among those specializing in machine learning and deep learning frequently vary, making it difficult for institutes to decide how to best direct resources.
As a result, Japanese researchers seem stuck in neutral while their counterparts in China, Europe, and the United States rush ahead in producing a steady string of breakthroughs. My interactions with academia have left me wondering whether the rigid thinking at Japanese universities is promoting a derisive view of deep learning, robbing it of the serious consideration it deserves.
As an illustration, I was asked by the Ministry of Economy, Trade, and Industry to help plan two sessions on deep learning for the annual JSAI symposium in June 2016. During discussions about possible titles for the meetings, one academic quipped that no one would consider attending sessions featuring the term “deep learning.” As astonishing a statement as this was, it showed me the lack of regard many Japanese researchers have for cutting-edge AI.
A Giant Leap in Machine Translation
It is certainly true that deep learning represented only a small fraction of the topics covered at the symposium. One must also bear in mind that the JSAI gathering preceded Google’s release of its revolutionary neural machine translation system.
Introduced in November 2016, the system forgoes standard models of natural language processing and instead uses an artificial neural network that over time learns from countless examples how to produce better translations. Although the concept is relatively straightforward, the sharp improvement it produced in the capabilities of Google Translate has left Japanese researchers gaping in awe.
In Japan the lion’s share of resources is directed toward studying natural language processing and machine translation. Researchers of second-generation AI have focused on unraveling the structure of intelligence with a view that it can be logically understood and recreated. As such, they have placed high value on analyzing the natural patterns of language as a means of unlocking the workings of the human mind.
The quality and accuracy of translations produced by the end-to-end approach of Google’s neural machine translation system, however, have turned this method on its head. Researchers in Japan have to accept the possibility that Google’s algorithm presents a revolutionary way forward or face the prospect of continuing down a dead-end road.
Overturning the Game Board
A similar ground-breaking dynamic can be seen behind DeepMind’s approach in developing AlphaGo.
DeepMind developed the program to learn via two deep neural networks: a value network that evaluates the state of the game and a policy network that guides what actions to take. It then boosted the playing ability of the algorithm by having it study records of past matches and pitting the two networks against one another, a method it calls deep reinforcement learning.
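The division of labor between a value function and a policy can be illustrated with a toy far simpler than go. The sketch below is not AlphaGo's method: it replaces trained neural networks with exact recursion on a trivial counting game (players alternately add 1 or 2 to a running total; whoever reaches exactly 10 wins). The game, function names, and self-play loop are all illustrative assumptions.

```python
from functools import lru_cache

# Toy stand-ins for a "value network" and "policy network" on a trivial
# counting game, solved exactly by recursion rather than learned.

TARGET = 10
MOVES = (1, 2)

@lru_cache(maxsize=None)
def value(total):
    """Value stand-in: +1 if the player to move wins with perfect play
    from this running total, -1 otherwise."""
    if total >= TARGET:  # previous player just reached the target
        return -1
    # A position is winning if some move leaves the opponent losing.
    return 1 if any(value(total + m) == -1 for m in MOVES) else -1

def policy(total):
    """Policy stand-in: rank legal moves by the value of the resulting
    position for the opponent (best move for us first)."""
    return sorted(MOVES, key=lambda m: value(total + m))

# Self-play: both sides follow the policy's top-ranked move.
total, player = 0, 0
while total < TARGET:
    total += policy(total)[0]
    player = 1 - player
winner = 1 - player  # the player who made the final move
print(f"Player {winner} wins the self-play game at total {total}")
```

In AlphaGo both functions are instead approximated by deep networks, and self-play games like this loop generate the experience used to improve them, which is the essence of deep reinforcement learning.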
Prior to AlphaGo, designers of go-playing AI took a fairly straightforward approach, arming programs with the rules and tactics and focusing on devising ways of winning against professionals. While significant progress was made, no AI capable of matching the wily skills of leading human players emerged.
Like a bolt from the blue, though, the relatively unknown DeepMind swooped in to claim the holy grail of computer go. Since then, leading IT companies in Japan and China like Dwango and Baidu, along with amateur programmers, have been emboldened to test their algorithms at various AI go competitions. On the other hand, Japan’s University of Electro-Communications shuttered its UEC Cup Computer Go tournament in the wake of AlphaGo’s victory, hosting the decade-old event for the final time in March 2017.
DeepMind has not rested on its success but is maneuvering to take the lead globally with a string of newly developed technologies. Domestically, companies such as Yahoo Japan and Dwango, along with emerging firms like Preferred Networks, are pouring resources into deep learning. Meanwhile, major manufacturers including Toyota and FANUC are continuing to invest in AI. This is heartening to see, and I hope their efforts will pay real dividends in the near future.
(Originally published in Japanese on March 30, 2017. Banner photo: South Korean professional go player Lee Sedol faces off against DeepMind’s AlphaGo in March 2016. The AI program won the competition 4–1. © AP/Alfo.)