Computational Intelligence Meets Game of Go @ IEEE WCCI 2012

Abstract

Digital Object Identifier: 10.1109/MCI.2012.2215493

Since 2008, the National University of Tainan (NUTN) in Taiwan and other academic organizations have hosted or organized several human vs. computer Go events [1, 2, 3, 4, 5] in Taiwan and at IEEE CIS flagship conferences, including FUZZ-IEEE 2009, IEEE WCCI 2010, IEEE SSCI 2011, and FUZZ-IEEE 2011. Chun-Hsun Chou (9P), Ping-Chiang Chou (5P), Joanne Missingham (6P), Shang-Rong Tsai (6D), Sheng-Shu Chang (6D), and Shi-Jim Yen (6D) were invited to attend the Human vs. Computer Go Competition @ IEEE WCCI 2012 (http://oase.nutn.edu.tw/wcci2012/ and http://top.twman.org/wcci2012), held in Brisbane, Australia, in June 2012. Seven computer Go programs challenged the human players: MoGo/MoGoTW (France, Netherlands, and Taiwan), Many Faces of Go (USA), Zen (Japan), Erica (Taiwan), Fuego (Canada), Pachi (Czech Republic and France), and Coldmilk (Taiwan). In addition to showing how much progress has been made in artificial intelligence, the competition also collected physiological measurements to study the cognitive science of the game of Go. The theme is "the Most Strategic Game" because Go is the deepest known game by the classical "depth" criterion. The games planned for the competition included: 1) 7x7 small-board games, to see whether computers can also outperform humans when the conditions slightly favor the humans; 2) 9x9, 13x13, and 19x19 board games, to see how far computers now are from humans; and 3) novel activities with physiological measurements, to see whether physiological signals are also affected by different playing conditions. It is now known that Go is specific in the sense that the brain areas involved in playing Go are not exactly the same as those involved in chess; in particular, Go involves more spatial reasoning, mental verbalization, and motor control. The games were designed not only to investigate the current level of strong programs on various board sizes, but also to monitor the human brain and to check the player-strength assessment capabilities of Go programs. The Australian Broadcasting Corporation (ABC, http://www.youtube.com/watch?v=KhGvzaMFNAI) also reported on this event under the title "AI expo underway in Brisbane" on June 15, 2012. The results of the games held at IEEE WCCI 2012 are briefly listed as follows:
❏ 7x7: We assume that the fair komi in 7x7 Go is 9, which is the usual belief for 7x7. In June 2011, MoGoTW won 20 games against 10 professional Go players with komi in favor of the computer, and each human played once as Black and once as White [6]. Komi was 9.5 when the computer was White and 8.5 when the computer was Black. This was done in order to check the ability of the computer to reach optimal play in 7x7. In spite of a mistake in one game (followed by a mistake by the human), the computer had a perfect record of 20 wins out of 20 games [6]. For the competition held at IEEE WCCI 2012, we played both an easy setting (komi in favor of the computer, i.e., 9.5 when the computer is White and 8.5 when the computer is Black) and a difficult setting (komi in favor of the human, i.e., 8.5 when the computer is White and 9.5 when the computer is Black); the scoring sketch after this list illustrates both settings. The results show that MoGoTW won all games that were "easy" for it and won 50% of the "hard" games against three 6D players. MoGoTW also won 1 out of 6 games in the hard setting against professional players. However, Joanne Missingham (6P) said that the reason she lost her game is that she is not familiar with 7x7 Go; the game she lost was her first 7x7 game.
MoGoTW has thus outperformed humans in 7x7 Go, making no mistakes, whereas the humans, including one 6P player, made mistakes.
❏ 9x9: 9x9 is the favorite format of computers, which now routinely win games against professional players. Komi was often 7.5 in past competitions, but komi was 7 this time, so a game could end in a draw; computers must be able to deal with the draw situation. From the results of the six 9x9 games, we learned that 1) surprisingly, only the amateurs lost games, 2) the humans performed well even though two of these games were played blindfolded, and 3) even strong bots like Pachi or Many Faces of Go can lose even (no-handicap) games to professional players. Hence, humans are still strong against computers in 9x9 Go with komi 7.
❏ 13x13: 13x13 is a nice platform for Go, as it involves less immediate fighting than 9x9 Go and is less time-consuming than standard 19x19 Go. The results of the eleven 13x13 games show that the humans won all of them, including the four kill-all Go games. In June 2011 [5], computers won four out of eight 13x13 games against professional players with H2 and 3.5 komi (i.e., two handicap stones and a komi of 3.5).
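To make the komi arithmetic above concrete, the following is a minimal, illustrative Python sketch (not taken from the article) of how a result is decided under area scoring: the winner is found by comparing Black's board score with White's board score plus komi. The specific board splits below are assumptions for illustration only, chosen to match the "fair komi 9" belief on 7x7 and the integer komi 7 used on 9x9, which admits a draw.

```python
def game_result(black_area: float, white_area: float, komi: float) -> str:
    """Decide a Go game under area scoring.

    black_area / white_area: points controlled by each side on the board.
    komi: compensation added to White's score.
    Returns "B+m", "W+m", or "Draw" (a draw is possible only with integer komi).
    """
    margin = black_area - (white_area + komi)
    if margin > 0:
        return f"B+{margin:g}"
    if margin < 0:
        return f"W+{-margin:g}"
    return "Draw"


# 7x7: assuming the usual belief that perfect play gives Black the board by
# 9 points (fair komi 9), the 49 points split 29 / 20 (illustrative split).
print(game_result(29, 20, komi=8.5))   # B+0.5 -> this komi favors Black
print(game_result(29, 20, komi=9.5))   # W+0.5 -> this komi favors White

# 9x9 with the integer komi 7 used at WCCI 2012: a 44 / 37 split of the 81
# points is a draw, so programs must handle a third outcome besides win/loss.
print(game_result(44, 37, komi=7))     # Draw
```

The half-point komi values (8.5, 9.5) sit on either side of the assumed fair value of 9, which is why one setting is "easy" and the other "difficult" for the computer, while the whole-point komi of 7 on 9x9 is what makes drawn games possible.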

Cite this paper

@inproceedings{Chou2012ComputationalIM,
  title={Computational Intelligence Meets Game of Go @ IEEE WCCI 2012},
  author={Hsun Chou and Ping-Chiang Chou and Rong-Kung Tsai and S Chang},
  year={2012}
}