One of my first forays into the world of scientific research was working with Bret Beheim and Richard McElreath on an analysis of the prevalence and role of social learning in the ancient Chinese game of Go. So it has been enjoyable to watch the media blizzard over the occurrence of something once unthinkable: a computer beat the World Go Champion! Although chess was conquered by computers many years ago, it was thought that the substantially larger number of potential game states in Go (played on a 19x19 board) would allow human experience and intuition to win out over the brute-force approaches of computers. That proved not to be the case when the World Go Champion, Lee Sedol of South Korea, lost the first three games of his five-game match against Google's AlphaGo.
As Richard has noted, one of the results from our paper about human Go masters may apply to AlphaGo as well. We found that Go professionals attend to the popularity of a move as well as to its recent successes. AlphaGo was trained on a database of human Go games, making it perhaps the most reliant of all on social learning.
And interestingly enough, our results indicate that Lee Sedol heavily relies on social, rather than individual, learning.
Together, these results suggest that neither Lee Sedol nor AlphaGo would be well predicted by their own past moves, and that each would therefore be harder to prepare against. Anecdotally, this contrasts with how Deep Blue was developed to play World Chess Champion Garry Kasparov. Deep Blue's creators programmed the computer to attack Kasparov's known weaknesses, but when Kasparov played outside his usual repertoire, he was able to stymie Deep Blue. It seems that Lee Sedol may have discovered a similar weakness in AlphaGo, as he was able to win the fourth game of the match.