## Archive for the ‘Games’ Category

### Writing a Halite Bot in Clojure [Incomplete Knowledge/Deception Bots?]

Tuesday, December 6th, 2016

From the post:

Halite is a new AI programming competition that was recently released by Two Sigma and Cornell Tech. It was designed and implemented by two interns at Two Sigma and was run as the annual internal summer programming competition.

While the rules are relatively simple, it proved to be a surprisingly deep challenge. It’s played on a 2D grid and a typical game looks like this:

Each turn, all players simultaneously issue movement commands to each of their pieces:

1. Move to an adjacent location and capture it if you are stronger than what’s currently there.
2. Stay put and build strength based on the production value of your current location.

When two players’ pieces are adjacent to each other, they automatically fight. A much more detailed description is available on the Halite Game Rules page.
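The two orders above are easy to sketch in code. Here is a toy model in Python; the names and the strictly-stronger capture rule are my reading of the rules page, not the official Halite engine, which resolves simultaneous moves and combat in more detail:

```python
# Toy model of the two Halite orders: STILL builds strength from the
# square's production; a move captures the target only if the mover is
# strictly stronger. Illustrative only, not the official game engine.

def stay(square):
    """STILL: build strength from the square's production value."""
    square["strength"] += square["production"]
    return square

def move(attacker, target):
    """Move onto an adjacent square, capturing it if strictly stronger."""
    if attacker["strength"] > target["strength"]:
        return {"owner": attacker["owner"],
                "strength": attacker["strength"] - target["strength"],
                "production": target["production"]}
    return target  # attack fails; real combat rules are richer

mine = {"owner": 1, "strength": 10, "production": 3}
neutral = {"owner": 0, "strength": 6, "production": 2}

print(stay(dict(mine))["strength"])     # 13: strength grows by production
print(move(mine, neutral)["owner"])     # 1: capture succeeds, 10 > 6
print(move(mine, neutral)["strength"])  # 4: surplus strength remains
```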

Looking at the rules page, it looks like:

• Bots have accurate knowledge of all positions and values.
• Deception of bots isn’t permitted.
• Interesting from a bot programming perspective, but lack of knowledge and the ever-present danger of deception are integral parts of human games.

Are there any bot games that feature a lack of knowledge, deception, or both?

### Geek Jeopardy – Display Random Man Page

Tuesday, November 22nd, 2016

While writing up Julia Evans’ Things to learn about Linux, I thought it would be cool to display random man pages.

Which resulted in this one-liner in an executable file (man-random, invoke ./man-random):

man $(ls /usr/share/man/man* | shuf -n1 | cut -d. -f1)

As written, it displays a random page from the directories man1 – man8. If you replace /man* with /man1/, you will only get results for man1 (the usual default).

All of which made me think of Geek Jeopardy! Can you name these commands from their first-paragraph descriptions? (Their names are omitted.)

• remove sections from each line of files
• pattern scanning and processing language
• stream editor for filtering and transforming text
• generate random permutations
• filter reverse line feeds from input
• dump files in octal and other formats

Looks easy now, but after a few glasses of holiday cheer? With spectators? Ready to try another man page section?

Enjoy!

Solution:

• cut: remove sections from each line of files
• awk: pattern scanning and processing language
• sed: stream editor for filtering and transforming text
• shuf: generate random permutations
• col: filter reverse line feeds from input
• od: dump files in octal and other formats

PS: I changed the wildcard in the fourth suggested solution from “?” to “*” to arrive at my solution. (Ubuntu 14.04)

### Orwell: The surveillance game that puts you in Big Brother’s shoes [Echoes of Ender’s Game?]

Sunday, November 13th, 2016

From the post:

“Big Brother has arrived — and it’s you.”

As CNET’s resident privacy nark, I didn’t need much convincing to play a game all about social engineering and online surveillance. But when I stepped into my role as a new recruit for the fictional Orwell internet surveillance program, I didn’t expect to find the rush of power so beguiling, or unsettling.

Developed by German outfit Osmotic Studios, Orwell sees you working as a new recruit in a surveillance agency of the same name, following a series of terrorist attacks in Bonton, the fictional capital of The Nation.
As an agent, you are responsible for scraping social media feeds, blogs, news sites and the private communications of the Nation’s citizens to find those with connections to the bombings. You start with your first suspect before working through a web of friends and associates.

You’re after data chunks — highlighted pieces of information and text found in news stories, websites and blogs that can be dragged and uploaded into the Orwell system and permanently stored as evidence.

The whole game has a kind of polygon graphic aesthetic, making the news clippings, websites and social media feeds you’re trawling feel close to the real thing. But as with everything in Orwell, it’s viewed through a glass, darkly.

If you are a game player, this sounds wickedly seductive.

If you’re not, what if someone weaponized Orwell so that what appear to be “in the game” hacks are hacks in the “real world?” A cybersecurity “Ender’s Game” where the identity of targets and consequences of attacks are concealed from hackers?

Are the identity of targets or consequences of attacks your concern? Or is credit for breaching defenses and looting data enough?

Before reaching that level of simulation, imagine changing from the lone/small-group hacker model to a more distributed model, where anonymous hackers offer specialized skills, data or software in collaboration on proposed hacks.

Ideas on the requirements for such a collaborative system? Assuming nation states get together on cybersecurity, it could be a mechanism to match or even outperform such efforts.

### Torturing Iraqi Prisoners – Roles for Heroes like Warrant Officer Hugh Thompson?

Monday, August 1st, 2016

Kaveh Waddell pens a troubling story in A Video Game That Lets You Torture Iraqi Prisoners, which reads in part:

What if there were a way to make sense of state-sanctioned torture in a more visceral way than by reading a news article or watching a documentary?
Two years ago, that’s exactly what a team of Pittsburgh-based video-game designers set out to create: an experience that would bring people uncomfortably close to the abuses that took place in one particularly infamous prison camp.

In the game, which is still in development, players assume the role of an American service member stationed at Camp Bucca, a detention center that was located near the port city of Umm Qasr in southeast Iraq, at an undetermined time during the Iraq War. Throughout the game, players interact with Iraqi prisoners, who are clothed in the camp’s trademark yellow jumpsuits and occasionally have black hoods pulled over their heads. The player must interrogate the prisoners, choosing between methods like waterboarding or electrocution to extract information. If an interrogation goes too far, the questioner can kill the prisoner.

Players also have to move captives around the prison camp, arranging them in cell blocks throughout the area. Camp Bucca is best known for incubating the group of fighters who would go on to create ISIS: The group’s leader, Abu Bakr al-Baghdadi, was held there for five years, where he likely forged many of the connections that make up the group’s network today. The developers say they chose to have the player wrestle with cell assignments to underscore the role of American prison camps in radicalizing the next generation of fighters and terrorists.

The developers relied on allegations of prisoner abuse in archived news articles and a leaked Red Cross report to guide their game design. While there were many reports of prisoner abuse at Camp Bucca, they were never so widespread as to prompt an official public investigation.

I find the hope that the game will convey “the firsthand revulsion of being in the position of torturer” unrealistic in light of the literature on Stanley Milgram’s electric-shock studies.
In the early 1960s Milgram conducted a psychology experiment in which test subjects (who were actors and not harmed) could be shocked by student volunteers, under the supervision of an experimenter. The shocks went all the way to 450 volts, and a full 65% of the volunteers went all the way to 450 volts with the test subject screaming in pain.

Needless to say, the literature on that experiment has spanned decades, including re-enactments, some of which includes: The Game of Death: France’s Shocking TV Experiment by Bruce Crumley.

Original materials: Obedience to Authority in the Archive

From the webpage:

Stanley Milgram, whose papers are held in Manuscripts and Archives, conducted the Obedience to Authority experiments while he was an assistant professor at Yale University from 1961 to 1963. Milgram found that most ordinary people obeyed instructions to give what they believed to be potentially fatal shocks to innocent victims when told to do so by an authority figure. His 1963 article[i] on the initial findings and a subsequent book, Obedience to Authority: An Experimental View (1974), and film, Obedience (1969), catapulted Milgram to celebrity status and made his findings and the experiments themselves the focus of intense ethical debates.[ii] Fifty years later the debate continues.

The Yale University Library acquired the Stanley Milgram Papers from Alexandra Milgram, his widow, in July 1985, less than a year after Milgram’s death. Requests for access started coming in soon after. The collection remained closed to research for several years until processed by archivist Diane Kaplan. In addition to the correspondence, writings, subject files, and teaching files often found in the papers of academics, the collection also contains the data files for Milgram’s experiments, including administrative records, notebooks, files on experimental subjects, and audio recordings of experimental sessions, debriefing sessions, and post-experiment interviews.
The only redeeming aspect of the experiment, and of real-life situations like My Lai, is that not everyone is willing to tolerate or commit outrageous acts.

Hopefully the game will include roles for people like Warrant Officer Hugh Thompson, who ended the massacre at My Lai by interposing his helicopter between American troops and retreating villagers and turned his weapons on the American troops.

Would you pull your weapon on a fellow member of the service to stop the torture of an Iraqi prisoner?

Would you use your weapon on a fellow member of the service to stop the torture of an Iraqi prisoner?

Would you?

Survey says: At least 65% of you would not.

### #AlphaGo Style Monte Carlo Tree Search In Python

Sunday, March 27th, 2016

Raymond Hettinger (@raymondh) tweeted the following links for anyone who wants an #AlphaGo style Monte Carlo Tree Search in Python: Introduction to Monte Carlo Tree Search by Jeff Bradberry.

Jeff’s post is your guide to Monte Carlo Tree Search in Python, while Cameron’s site bills itself as:

This site is intended to provide a comprehensive reference point for online MCTS material, to aid researchers in the field.

I didn’t see anything dated later than 2010 on Cameron’s site. Suggestions for other collections of MCTS material that are more up to date?

### AI Masters Go, Twitter, Not So Much (Log from @TayandYou?)

Thursday, March 24th, 2016

From the post:

A day after Microsoft introduced an innocent Artificial Intelligence chat robot to Twitter it has had to delete it after it transformed into an evil Hitler-loving, incestual sex-promoting, ‘Bush did 9/11’-proclaiming robot.

Developers at Microsoft created ‘Tay’, an AI modelled to speak ‘like a teen girl’, in order to improve the customer service on their voice recognition software. They marketed her as ‘The AI with zero chill’ – and that she certainly is.

The headline was suggested to me by a tweet from Peter Seibel:

Interesting how wide the gap is between two recent AI: AlphaGo and TayTweets.
The Turing Test is *hard*. http://gigamonkeys.com/turing/

In preparation for the next AI celebration, does anyone have a complete log of the tweets from Tay Tweets?

I prefer non-revisionist history where data doesn’t disappear. You can imagine the use Stalin would have made of that capability.

### Project AIX: Using Minecraft to build more intelligent technology

Monday, March 14th, 2016

From the post:

In the airy, loft-like Microsoft Research lab in New York City, five computer scientists are spending their days trying to get a Minecraft character to climb a hill.

That may seem like a pretty simple job for some of the brightest minds in the field, until you consider this: The team is trying to train an artificial intelligence agent to learn how to do things like climb to the highest point in the virtual world, using the same types of resources a human has when she learns a new task.

That means that the agent starts out knowing nothing at all about its environment or even what it is supposed to accomplish. It needs to understand its surroundings and figure out what’s important – going uphill – and what isn’t, such as whether it’s light or dark. It needs to endure a lot of trial and error, including regularly falling into rivers and lava pits. And it needs to understand – via incremental rewards – when it has achieved all or part of its goal.

“We’re trying to program it to learn, as opposed to programming it to accomplish specific tasks,” said Fernando Diaz, a senior researcher in the New York lab and one of the people working on the project.

The research project is possible thanks to AIX, a platform developed by Katja Hofmann and her colleagues in Microsoft’s Cambridge, UK, lab and unveiled publicly on Monday. AIX allows computer scientists to use the world of Minecraft as a testing ground for conducting research designed to improve artificial intelligence.

The project is in closed beta now but is said to be going open source in the summer of 2016.
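The trial-and-error, incremental-reward loop described above is the classic reinforcement learning setup. A minimal tabular Q-learning sketch on a toy one-dimensional “hill” (a generic illustration of the idea, not the AIX platform or its API):

```python
import random

# An agent on positions 0..5 learns, by trial and error with a reward
# only at the top, that the greedy action everywhere is "uphill" (+1).
random.seed(0)
TOP = 5
ACTIONS = [-1, +1]                      # step downhill / uphill
Q = {(s, a): 0.0 for s in range(TOP + 1) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != TOP:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), TOP)
        r = 1.0 if s2 == TOP else -0.01   # incremental reward at the goal
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# The learned greedy policy points uphill from every position.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(TOP)}
print(policy)
```

The real AIX agent faces the same problem at enormously larger scale: a 3D world, partial observations, and rewards that arrive only occasionally.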
Someone mentioned quite recently the state of documentation on Minecraft. Their impression was that there is a lot of information, but poorly organized.

If you are interested in exploring Minecraft ahead of the release this summer, see: How to Install Minecraft on Ubuntu or Any Other Linux Distribution.

### PyGame, Pong and Tensorflow

Monday, March 14th, 2016

Daniel Slater has a couple of posts of interest to AI game followers:

How to run learning agents against PyGame

Deep-Q learning Pong with Tensorflow and PyGame

If you like low-end video games… 😉

Seriously, the principles here can be applied to more complex situations and video games.

Enjoy!

### Lee Sedol “busted up” AlphaGo – Game 4

Monday, March 14th, 2016

From the post:

Expectations were modest on Sunday, as Lee Sedol 9p faced the computer Go program AlphaGo for the fourth time. Lee Sedol 9 dan, obviously relieved to win his first game.

After Lee lost the first three games, his chance of winning the five game match had evaporated. His revised goal, and the hope of millions of his fans, was that he might succeed in winning at least one game against the machine before the match concluded. However, his prospects of doing so appeared to be bleak, until suddenly, just when all seemed to be lost, he pulled a rabbit out of a hat. And he didn’t even have a hat!

Lee Sedol won game four by resignation.

A reversal of roles, but would you say that Sedol “busted up” AlphaGo?

Looking forward to the results of Game 5!

### Writing Games with Emacs

Sunday, February 28th, 2016

Writing Games with Emacs by Zachary Kanfer. (video)

From the description:

Games are a great way to get started writing programs in any language. In Emacs Lisp, they’re even better—you use the same exact techniques to extend Emacs, configuring it to do what you want. In this presentation, Zachary Kanfer livecodes tic-tac-toe. You’ll see how to create a basic major mode, make functions, store state, and set keybindings.

You can grab the source code at: zck.me.
Ready to build some muscle memory?

### The Hitchhiker’s Guide to the Galaxy Game – 30th Anniversary Edition

Thursday, February 11th, 2016

The Hitchhiker’s Guide to the Galaxy Game – 30th Anniversary Edition

From the webpage:

### A word of warning

The game will kill you frequently. It’s a bit mean like that. If in doubt, before you make a move, please save your game by typing “Save” then enter. You can then restore your game by typing “Restore” then enter. This should make it slightly less annoying getting killed all the time as you can go back to where you were before it happened. You’ll need to be signed in for this to work. You can sign in or register by clicking the BBCiD icon next to the BBC logo in the top navigation bar. Signing in will also allow you to tweet about your achievements, and to add a display name so you can get onto the high score tables.

Take fair warning: you can lose hours, if not days, playing this game.

The graphics may help you orient yourself in the various locations. That was missing in the original game. If you maintain focus on the screen, you can use your keyboard for data entry.

Graphics are way better now, but how do you compare the game play to current games?

Enjoy!

More details: About the game, Game Technical FAQ

### Cheating Cheaters [Honeypots for Government Agencies?]

Wednesday, February 3rd, 2016

From the post:

A Reddit user decided to tackle the issue of cheaters within Valve’s multiplayer shooter Counter Strike: Global Offensive in their own unique way: by luring them towards fake “multihacks” that promised a motherlode of cheating tools, but in reality, were actually traps designed to cause the users who installed them to eventually receive bans. The first two were designed as time bombs, which activated functions designed to trigger bans after a specific time of day. The third, which was downloaded over 3,500 times, caused instantaneous bans.

I wonder if anyone is running honeypots for intelligence agencies?
Or fake jihad sites for our friends in law enforcement?

Sort of a Spy vs. Spy situation, yes? Cyber-dueling with government, except you aren’t wearing protective gear and the tips aren’t blunted.

### [A] Game-Changing Go Engine (Losing mastery over computers? Hardly.)

Friday, December 4th, 2015

How Facebook’s AI Researchers Built a Game-Changing Go Engine

From the post:

One of the last bastions of human mastery over computers is the game of Go—the best human players beat the best Go engines with ease. That’s largely because of the way Go engines work. These machines search through all possible moves to find the strongest. While this brute force approach works well in draughts and chess, it does not work well in Go because of the sheer number of possible positions on a board. In draughts, the number of board positions is around 10^20; in chess it is 10^60. But in Go it is 10^100—that’s significantly more than the number of particles in the universe. Searching through all these is unfeasible even for the most powerful computers.

So in recent years, computer scientists have begun to explore a different approach. Their idea is to find the most powerful next move using a neural network to evaluate the board. That gets around the problem of searching. However, neural networks have yet to match the level of good amateur players or even the best search-based Go engines.

Today, that changes thanks to the work of Yuandong Tian at Facebook AI Research in Menlo Park and Yan Zhu at Rutgers University in New Jersey. These guys have combined a powerful neural network approach with a search-based machine to create a Go engine that plays at an impressively advanced level and has room to improve.

The new approach is based in large part on advances that have been made in neural network-based machine learning in just the last year or two. This is the result of a better understanding of how neural networks work and the availability of larger and better databases to train them.
This is how Tian and Zhu begin. They start with a database of some 250,000 real Go games. They used 220,000 of these as a training database. They used the rest to test the neural network’s ability to predict the next moves that were played in real games.

If you want the full details, check out:

Abstract:

Competing with top human players in the ancient game of Go has been a long-term goal of artificial intelligence. Go’s high branching factor makes traditional search techniques ineffective, even on leading-edge hardware, and Go’s evaluation function could change drastically with one stone change. Recent works [Maddison et al. (2015); Clark & Storkey (2015)] show that search is not strictly necessary for machine Go players. A pure pattern-matching approach, based on a Deep Convolutional Neural Network (DCNN) that predicts the next move, can perform as well as Monte Carlo Tree Search (MCTS)-based open source Go engines such as Pachi [Baudis & Gailly (2012)] if its search budget is limited. We extend this idea in our bot named darkforest, which relies on a DCNN designed for long-term predictions. Darkforest substantially improves the win rate for pattern-matching approaches against MCTS-based approaches, even with looser search budgets. Against human players, darkforest achieves a stable 1d-2d level on KGS Go Server, estimated from free games against human players. This substantially improves the estimated rankings reported in Clark & Storkey (2015), where DCNN-based bots are estimated at 4k-5k level based on performance against other machine players. Adding MCTS to darkforest creates a much stronger player: with only 1000 rollouts, darkforest+MCTS beats pure darkforest 90% of the time; with 5000 rollouts, our best model plus MCTS beats Pachi with 10,000 rollouts 95.5% of the time.

The author closes with this summary:

This kind of research is still in its early stages, so improvements are likely in the near future.
It may be that humans are about to lose their mastery over computers in yet another area.

I may have to go read the article again, because the program as described:

• Did not invent the game of Go or any of its rules.
• Did not play any of the 220,000 actual Go games used for training.

That is to say that the game of Go was invented by people, and people playing Go supplied the basis for this Go-playing computer.

Not to take anything away from the program or these researchers, but humans are hardly about to lose “…mastery over computers in yet another area.” Humans remain the creators of such games, the source of training data and the measure against which the computer measures itself.

Who do you think is master in such a relationship?*

* Modulo that the DHS wants to make answers from computers the basis for violating your civil liberties. But that’s a different type of “mastery” issue.

### Bytes that Rock! Software Awards 2015 (Nominations Open Now – Close 16th November 2015)

Friday, November 13th, 2015

Bytes that Rock! Software Awards 2015 (Nominations Open Now – Close 16th November 2015)

An awards program for excellence in software and blogs! The only limitation I could find is:

Bytes that Rock recognizes the best software and blogs for their excellence in the past 12 months.

Your game/software/blog may have been excellent three (3) years ago but that doesn’t count. 😉

Subject to that mild limitation, step up and:

Submit a blog, software or game by clicking on the categories below!

This is not a next week, or after I ask X, or when I get home task. This is a hit-the-submit-link-now task! You will feel better after having made a nomination. Promise. 😉

(Select the graphic for a much larger version of the image.)

### Visualizing Chess Data With ggplot

Monday, November 2nd, 2015

Sales of traditional chess sets peak during the holiday season.
The following graphic does not include sales of chess gadgets, chess software, or chess computers:

(Source: Terapeak Trends: Which Tabletop Games Sell Best on eBay? by Aron Hsiao.)

Joshua’s post is a guide to using and visualizing chess data under the following topics:

1. The Data
2. Piece Movements
3. Survival rates
4. Square usage by player
5. Distributions for the first movement
6. Who captures whom

Joshua is using public chess data, but it’s just a short step to using data from your own chess games or those of friends from your local chess club. 😉

Visualize the play of openings, defenses, players + openings/defenses; you are limited only by your imagination.

Give a chess friend a visualization they can’t buy in any store!

PS: Check out: rchess a Chess Package for R, also by Joshua Kunst.

I first saw this in a tweet by Christophe Lalanne.

### Parens of the Dead

Friday, August 21st, 2015

Parens of the Dead: A screencast series of zombie-themed games written with Clojure and ClojureScript.

Three episodes posted thus far:

Episode 1: Lying in the Ground
Starting with an empty folder, we’ll lay the technical groundwork for our game. We’ll get a Clojure web server up and running, compiling and sending ClojureScript to the browser.

Episode 2: Frontal Assault
In this one, we create most of the front-end code. We take a look at the data structure that describes the game, using that to build up our UI.

Episode 3: What Lies Beneath
The player has only one action available: revealing a tile. We’ll start implementing the central ‘reveal-tile’ function on the backend, writing tests along the way.

Next episode? Follow @parensofthedead

Another innovative instruction technique!

Suggestions:

1) Have your volume control available, because I found the sound in the screencasts to be very soft.

2) Be prepared to move very quickly, as episode one, for example, is only eleven minutes long.

3) Download the code and walk through it at a slower pace.

Enjoy!
### Chinese Tradition Inspires Machine Learning Advancements, Product Contributions

Friday, March 6th, 2015

Chinese Tradition Inspires Machine Learning Advancements, Product Contributions by George Thomas Jr.

From the post:

A new online Chinese riddle game is steeped in more than just tradition. In fact, the machine learning and artificial intelligence that fuels it derives from years of research that also helps drive Bing Search, Bing Translator, the Translator App for Windows Phone, and other products.

Launched in celebration of the Chinese New Year, Microsoft Chinese Character Riddle is based on the two-player game unique to Chinese traditional culture and part of the Chinese Lantern Festival. Developed by the Natural Language Computing Group in Microsoft Research’s Beijing lab, the game not only quickly returns an answer to a user’s riddle, but also works in reverse: when a user enters a single Chinese character as the intended answer, the system generates several riddles from which to choose.

“These innovations typically embody the strategic thought of Natural Language Processing 2.0, which is to collect big data on the Internet, to automatically build AI models using statistical machine learning methods, and to involve users in the innovation process by quickly getting their on-line feedback,” says Dr. Ming Zhou, Group Leader for the Natural Language Computing Group and Principal Researcher at Microsoft Research Asia. “Thus the riddle system will continue to improve.”

I don’t know any Chinese characters at all, so others will need to judge the usefulness of this machine learning version. I did find a general resource on Riddles about Chinese Characters.

What other word or riddle games would pose challenges for machine learning?

I first saw this in a tweet by Microsoft Research.

### Clojure and Overtone Driving Minecraft

Sunday, March 1st, 2015

Clojure and Overtone Driving Minecraft by Joseph Wilk.
From the post:

Using Clojure we create interesting 3D shapes in Minecraft to the beat of music generated from Overtone. We achieve this by embedding a Clojure REPL inside a Java Minecraft server which loads Overtone and connects to an external SuperCollider instance (what Overtone uses for sound).

Speaking of functional programming, you may find this useful.

The graphics and music are impressive!

### The New Chess World Champion

Tuesday, December 30th, 2014

From the post:

Larry Kaufman is a Grandmaster of chess, and has teamed in the development of two champion computer chess programs, Rybka and Komodo. I have known him from chess tournaments since the 1970s. He earned the title of International Master (IM) from the World Chess Federation in 1980, a year before I did. He earned his GM title in 2008 by dint of winning the World Senior Chess Championship, equal with GM Mihai Suba. Today we salute Komodo for winning the 7th Thoresen Chess Engines Competition (TCEC), which some regard as the de-facto world computer chess championship.

Partially computer chess history and present, with asides on Shogi, dots-and-boxes, Arimaa, and Go.

Regan forgets to mention that thus far, computers don’t compete at all in the game of thrones. No one has succeeded in teaching a computer to lie. That is, knowing the correct answer and, for motives of its own, concealing that answer and offering another.

PS: Komodo (commercial, $59.96)

Stockfish (open source)

### How to Win at Rock-Paper-Scissors

Friday, December 26th, 2014

How to Win at Rock-Paper-Scissors

From the post:

The first large-scale measurements of the way humans play Rock-Paper-Scissors reveal a hidden pattern of play that opponents can exploit to gain a vital edge.

If you’ve ever played Rock-Paper-Scissors, you’ll have wondered about the strategy that is most likely to beat your opponent. And you’re not alone. Game theorists have long puzzled over this and other similar games in the hope of finding the ultimate approach.

It turns out that the best strategy is to choose your weapon at random. Over the long run, that makes it equally likely that you will win, tie, or lose. This is known as the mixed strategy Nash equilibrium in which every player chooses the three actions with equal probability in each round.

And that’s how the game is usually played. Various small-scale experiments that record the way real people play Rock-Paper-Scissors show that this is indeed the strategy that eventually evolves.
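The mixed-strategy equilibrium above is easy to verify by simulation: when both players throw uniformly at random, win, tie, and loss frequencies each converge to one third. A quick sketch:

```python
import random

# Simulate Rock-Paper-Scissors with both players choosing uniformly at
# random, and check that wins, ties, and losses each occur about 1/3
# of the time.
random.seed(42)
MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
ROUNDS = 100_000

counts = {"win": 0, "tie": 0, "loss": 0}
for _ in range(ROUNDS):
    a, b = random.choice(MOVES), random.choice(MOVES)
    if a == b:
        counts["tie"] += 1
    elif BEATS[a] == b:        # a's move beats b's move
        counts["win"] += 1
    else:
        counts["loss"] += 1

print({k: round(v / ROUNDS, 3) for k, v in counts.items()})
```

Each frequency lands close to 0.333. The interesting finding of the paper, of course, is what real humans do when they deviate from this ideal.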

No, I’m not going to give away the answer!

I will only say the answer isn’t what has been previously thought.

Why the different answer? Well, the authors speculate (with some justification) that the small size of prior experiments hid a data pattern that becomes quite obvious at larger scale.

Given that N < 100 in so many sociology, psychology, and other social science experiments, the existing literature offers a vast number of opportunities where repeating small experiments at large scale could produce different results. If you have any friends in a local social science department, you might want to suggest this to them as a way to be on the front end of big data in social science.

PS: If you have access to a social science index, please search and post a rough count of experiments with participants < 100 in some subset of social science journals, say since 1970. Thanks!

### Teaching Deep Convolutional Neural Networks to Play Go

Saturday, December 20th, 2014

Teaching Deep Convolutional Neural Networks to Play Go by Christopher Clark and Amos Storkey.

Abstract:

Mastering the game of Go has remained a long standing challenge to the field of AI. Modern computer Go systems rely on processing millions of possible future positions to play well, but intuitively a stronger and more ‘humanlike’ way to play the game would be to rely on pattern recognition abilities rather than brute force computation. Following this sentiment, we train deep convolutional neural networks to play Go by training them to predict the moves made by expert Go players. To solve this problem we introduce a number of novel techniques, including a method of tying weights in the network to ‘hard code’ symmetries that are expected to exist in the target function, and demonstrate in an ablation study that they considerably improve performance. Our final networks are able to achieve move prediction accuracies of 41.1% and 44.4% on two different Go datasets, surpassing previous state of the art on this task by significant margins. Additionally, while previous move prediction programs have not yielded strong Go playing programs, we show that the networks trained in this work acquired high levels of skill. Our convolutional neural networks can consistently defeat the well known Go program GNU Go, indicating it is state of the art among programs that do not use Monte Carlo Tree Search. It is also able to win some games against state of the art Go playing program Fuego while using a fraction of the play time. This success at playing Go indicates high level principles of the game were learned.
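One way to “tie weights to hard-code symmetries,” as the abstract puts it, is to average a convolution filter over the eight rotations and reflections of the Go board, so the tied filter responds identically to a position and to any of its symmetric images. The sketch below is an illustrative reconstruction of that idea, not the authors’ actual code:

```python
import numpy as np

def d4_orbit(w):
    """All 8 dihedral transforms (rotations + reflections) of a square filter."""
    out = []
    for k in range(4):
        r = np.rot90(w, k)
        out.extend([r, np.fliplr(r)])
    return out

def tie(w):
    """Project a filter onto the symmetric subspace by averaging its orbit."""
    return sum(d4_orbit(w)) / 8.0

rng = np.random.default_rng(0)
w = tie(rng.standard_normal((3, 3)))

# The tied filter is unchanged by every board symmetry:
print(all(np.allclose(w, t) for t in d4_orbit(w)))  # True
```

Averaging over the group orbit guarantees invariance, because applying any of the eight transforms merely permutes the terms in the sum.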

If you are going to pursue the study of Monte Carlo Tree Search for semantic purposes, there isn’t any reason to not enjoy yourself as well. 😉

And following the best efforts in game playing will be educational as well.

I take the computer efforts at playing Go, like those at chess, as an indication of how far ahead humans still are of AI.

Both of those two-player, complete-knowledge games were mastered long ago by humans. Multi-player games with extended networks of influence and motives, not to mention incomplete information as well, seem securely reserved for human players for the foreseeable future. (I wonder if multi-player scenarios are similar to the multi-body problem in physics? Except with more influences.)

I first saw this in a tweet by Ebenezer Fogus.

### Monte-Carlo Tree Search for Multi-Player Games [Semantics as Multi-Player Game]

Saturday, December 20th, 2014

Monte-Carlo Tree Search for Multi-Player Games by Joseph Antonius Maria Nijssen.

From the introduction:

The topic of this thesis lies in the area of adversarial search in multi-player zero-sum domains, i.e., search in domains having players with conflicting goals. In order to focus on the issues of searching in this type of domains, we shift our attention to abstract games. These games provide a good test domain for Artificial Intelligence (AI). They offer a pure abstract competition (i.e., comparison), with an exact closed domain (i.e., well-defined rules). The games under investigation have the following two properties. (1) They are too complex to be solved with current means, and (2) the games have characteristics that can be formalized in computer programs. AI research has been quite successful in the field of two-player zero-sum games, such as chess, checkers, and Go. This has been achieved by developing two-player search techniques. However, many games do not belong to the area where these search techniques are unconditionally applicable. Multi-player games are an example of such domains. This thesis focuses on two different categories of multi-player games: (1) deterministic multi-player games with perfect information and (2) multi-player hide-and-seek games. In particular, it investigates how Monte-Carlo Tree Search can be improved for games in these two categories. This technique has achieved impressive results in computer Go, but has also been shown to be beneficial in a range of other domains.

This chapter is structured as follows. First, an introduction to games and the role they play in the field of AI is provided in Section 1.1. An overview of different game properties is given in Section 1.2. Next, Section 1.3 defines the notion of multi-player games and discusses the two different categories of multi-player games that are investigated in this thesis. A brief introduction to search techniques for two-player and multi-player games is provided in Section 1.4. Subsequently, Section 1.5 defines the problem statement and four research questions. Finally, an overview of this thesis is provided in Section 1.6.
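For context on what makes multi-player search different from minimax, here is a sketch of the classic maxⁿ backup rule, a common baseline for the multi-player MCTS work the thesis describes: each player backs up the payoff vector that maximizes their own component. The tree encoding (nested lists with payoff tuples at the leaves) is my own, not the thesis’s:

```python
def maxn(node, player, num_players):
    """max^n search: `player` is the player to move at `node`.
    A node is either a payoff tuple (leaf, one entry per player)
    or a list of child nodes."""
    if isinstance(node, tuple):
        return node
    next_player = (player + 1) % num_players
    best = None
    for child in node:
        value = maxn(child, next_player, num_players)
        if best is None or value[player] > best[player]:
            best = value  # keep the vector best for the mover
    return best

# Tiny 3-player tree: player 0 moves at the root, player 1 below.
tree = [[(1, 2, 0), (0, 1, 2)],
        [(2, 0, 1), (1, 1, 1)]]
result = maxn(tree, 0, 3)
```

Note there is no single "opponent" to minimize against, which is exactly why two-player pruning techniques stop being unconditionally applicable.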

This thesis is great background reading on the use of Monte-Carlo Tree Search in games. While reading the first chapter, I realized that assigning semantics to a token is an instance of a multi-player game with hidden information. That is, the “semantic” of any token doesn’t exist in some Platonic universe but is the result of some N number of players accepting a particular semantic for a given token in a particular context. And we lack knowledge of the semantic, and the reasons for it, that will be assigned by some N number of players, both of which may change over time and context.

The semiotic triangle of Ogden and Richards (The Meaning of Meaning):

for any given symbol, represents the view of a single speaker. But as Ogden and Richards note, what is heard by listeners should be represented by multiple semiotic triangles:

Normally, whenever we hear anything said we spring spontaneously to an immediate conclusion, namely, that the speaker is referring to what we should be referring to were we speaking the words ourselves. In some cases this interpretation may be correct; this will prove to be what he has referred to. But in most discussions which attempt greater subtleties than could be handled in a gesture language this will not be so. (The Meaning of Meaning, page 15 of the 1923 edition)

Is RDF/OWL more subtle than can be handled by a gesture language? If you think so then you have discovered one of the central problems with the Semantic Web and any other universal semantic proposal.

Not that topic maps escape a similar accusation, but with topic maps you can encode additional semiotic triangles in an effort to avoid confusion, at least to the extent of funding and interest. And if you aren’t trying to avoid confusion, you can supply semiotic triangles that reach across understandings to convey additional information.

You can’t avoid confusion altogether nor can you achieve perfect communication with all listeners. But, for some defined set of confusions or listeners, you can do more than simply repeat your original statements in a louder voice.

Whether Monte-Carlo Tree searches will help deal with the multi-player nature of semantics isn’t clear but it is an alternative to repeating “…if everyone would use the same (my) system, the world would be better off…” ad nauseam.

I first saw this in a tweet by Ebenezer Fogus.

### Deep learning for… chess

Monday, December 15th, 2014

Deep learning for… chess by Erik Bernhardsson.

From the post:

I’ve been meaning to learn Theano for a while and I’ve also wanted to build a chess AI at some point. So why not combine the two? That’s what I thought, and I ended up spending way too much time on it. I actually built most of this back in September but not until Thanksgiving did I have the time to write a blog post about it.

Chess sets are a common holiday gift so why not do something different this year?

Pretty print a copy of this post and include a gift certificate from AWS for a GPU instance for, say, a week to ten days.

I don’t think AWS sells gift certificates, but they certainly should. Great stocking stuffer, anniversary/birthday/graduation present, etc. Not so great for Valentine’s Day.

If you ask AWS for a gift certificate, mention my name. They don’t know who I am so I could use the publicity. 😉

I first saw this in a tweet by Onepaperperday.

### Jeopardy! clues data

Sunday, November 30th, 2014

Jeopardy! clues data. Nathan Yau writes:

Here’s some weekend project data for you. Reddit user trexmatt dumped a dataset for 216,930 Jeopardy! questions and answers in JSON and CSV formats, a scrape from the J! Archive. Each clue is represented by category, money value, the clue itself, the answer, round, show number, and air date.

Nathan suggests hunting for Daily Doubles but then discovers someone has done that. (See Nathan’s post for the details.)
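If you want to poke at the dump before hunting for patterns yourself, a few lines of Python get you started. The field names below are my reading of the description above (category, value, clue, answer, round, show number, air date); check them against the actual JSON file before relying on them:

```python
import json
from collections import Counter

# A couple of clues in the shape the post describes, inlined so the
# sketch runs without the real dump. Clue text is elided.
sample = json.loads("""[
 {"category": "HISTORY", "value": "$200", "question": "...",
  "answer": "Copernicus", "round": "Jeopardy!",
  "show_number": "4680", "air_date": "2004-12-31"},
 {"category": "SCIENCE", "value": null, "question": "...",
  "answer": "a quark", "round": "Final Jeopardy!",
  "show_number": "4680", "air_date": "2004-12-31"}
]""")

# Count clues per round -- swap `sample` for the full 216,930-clue file.
by_round = Counter(clue["round"] for clue in sample)
```

The same one-pass Counter idiom works for categories, air dates, or dollar values.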

Enjoy!

### DeepView: Computational Tools for Chess Spectatorship [Knowledge Retention?]

Sunday, October 19th, 2014

DeepView: Computational Tools for Chess Spectatorship by Greg Borenstein, Prof. Kevin Slavin, Grandmaster Maurice Ashley.

From the post:

DeepView is a suite of computational and statistical tools meant to help novice viewers understand the drama of a high-level chess match through storytelling. Good drama includes characters and situations. We worked with GM Ashley to identify the elements of individual players’ styles and the components of an ongoing match that computation could analyze to help bring chess to life. We gathered an archive of more than 750,000 games from chessgames.com including extensive collections of games played by each of the grandmasters in the tournament. We then used the Stockfish open source chess engine to analyze the details of each move within these games. We combined these results into a comprehensive statistical analysis that provided us with meaningful and compelling information to pass on to viewers and to provide to chess commentators to aid in their work.

In addition to making chess more accessible to novice viewers, we believe that providing access to these kinds of statistics will change how expert players play chess, allowing them to prepare differently for specific opponents and to detect limitations or quirks in their own play.

Further, we believe that the techniques used here could be applied to other sports and games as well. Specifically we wonder why traditional sports broadcasting doesn’t use measures of significance to filter or interpret the statistics they show to their viewers. For example, is a batter’s RBI count actually informative without knowing whether it is typical or extraordinary compared to other players? And when it comes to eSports with their exploding viewer population, this approach points to rich possibilities for improving the spectator experience and translating complex gameplay so it is more legible for novice fans.
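The RBI question above is easy to make concrete: a z-score against the rest of the league is the crudest possible "typical vs. extraordinary" filter a broadcast could apply. The league numbers here are invented for illustration:

```python
from statistics import mean, stdev

def z_score(value, population):
    """How many standard deviations `value` sits from the population
    mean -- a crude significance filter for a broadcast statistic."""
    return (value - mean(population)) / stdev(population)

# Hypothetical RBI counts for comparable players in a league.
league_rbis = [52, 61, 48, 70, 55, 64, 58, 49, 66, 57]

# 110 RBIs against this field is several standard deviations out:
# worth telling viewers about. 58 is dead average: not.
score = z_score(110, league_rbis)
```

A commentator’s graphic could then show only statistics that clear some threshold, say |z| > 2, rather than raw counts.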

A deeply intriguing notion of mining data to extract patterns that are fashioned into a narrative by an expert.

Participants in the games were not called upon to make explicit the tacit knowledge they unconsciously rely upon to make decisions. Instead, decisions (moves) were collated into patterns and an expert recognized those patterns to make the tacit knowledge explicit.

Outside of games would this be a viable tactic for knowledge retention? Not asking employees/experts but recording their decisions and mining those for later annotation?

### Clojure in Unity 3D: Functional Video Game Development

Thursday, September 25th, 2014

Clojure in Unity 3D: Functional Video Game Development by Ramsey Nasser and Tims Gardner.

I had never considered computer games from this perspective:

You have to solve every hard problem in computer science, 60 times a second. Brandon Bloom.

Great presentation, in part because of its focus on demonstrating results. Interested viewers left to consult the code for the details.

Combines Clojure with Unity (a game engine) that can export to PS4.

Is being enjoyable the primary difference between video games and most program interfaces?

A project to watch!

http://github.com/clojure-unity

Unity (Game Engine) (Windows/Mac OS)

PS: I need to get a PS4 in order to track game development with Clojure. If you want to donate one to that cause, contact me for a shipping address.

I won’t spend countless hours playing games that are not Clojure related. I am juggling enough roles without adding any fantasy (computer-based anyway) ones. 😉

### T3TROS (ClojureScript)

Thursday, September 4th, 2014

T3TROS (ClojureScript)

From the webpage:

We are re-creating Tetris™ in ClojureScript. We are mainly doing this for the pleasure of it and to celebrate the 30th anniversary of its original release in 1984. Our remake will enable us to host a small, local tournament and to share a montage of the game’s history, with each level resembling a different version from its past. (We are working on the game at least once a week):

• DevBlog 1 – data, collision, rotation, drawing
• DevBlog 2 – basic piece control
• DevBlog 3 – gravity, stack, collapse, hard-drop
• DevBlog 4 – ghost piece, flash before collapse
• DevBlog 5 – game over animation, score
• DevBlog 6 – level speeds, fluid drop, improve collapse animation, etc.
• DevBlog 7 – draw next piece, tilemap for themes
• DevBlog 8 – allow connected tiles for richer graphics
• DevBlog 9 – live board broadcasting
• DevBlog 10 – chat room, more tilemaps, page layouts
• DevBlog 11 – page routing, username

What could possibly go wrong with an addictive video game as the target of a programming exercise? 😉
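DevBlog 1’s collision step is the core test that every later feature (gravity, hard-drop, rotation) leans on. The T3TROS code is ClojureScript; here is the same idea as a hypothetical Python sketch with my own board and piece encoding:

```python
# Board: set of occupied (row, col) cells; piece: list of (row, col)
# offsets from the piece's origin.
def collides(board, piece, row, col, rows=20, cols=10):
    """True if the piece, placed with its origin at (row, col),
    overlaps the stack or leaves the board."""
    for dr, dc in piece:
        r, c = row + dr, col + dc
        if r < 0 or r >= rows or c < 0 or c >= cols or (r, c) in board:
            return True
    return False

L_PIECE = [(0, 0), (1, 0), (2, 0), (2, 1)]   # a vertical L shape
stack = {(19, c) for c in range(10)}         # a full bottom row
```

Gravity is then just "move down one row unless that collides"; a hard-drop keeps moving down until the next step would collide.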

Shaun LeBron has posted Interactive Guide to Tetrix in ClojureScript.

The interactive guide is very helpful!

Will echoes of Tetris™ tempt you into functional programming? What video classics will you produce?

### Game Dialogue + FOL + Clojure

Thursday, August 28th, 2014

Representing Game Dialogue as Expressions in First Order Logic by Kaylen Wheeler.

Abstract:

Despite advancements in graphics, physics, and artificial intelligence, modern video games are still lacking in believable dialogue generation. The more complex and interactive stories in modern games may allow the player to experience different paths in dialogue trees, but such trees are still required to be manually created by authors. Recently, there has been research on methods of creating emergent believable behaviour, but these are lacking true dialogue construction capabilities. Because the mapping of natural language to meaningful computational representations (logical forms) is a difficult problem, an important first step may be to develop a means of representing in-game dialogue as logical expressions. This thesis introduces and describes a system for representing dialogue as first-order logic predicates, demonstrates its equivalence with current dialogue authoring techniques, and shows how this representation is more dynamic and flexible.

If you remember the Knights and Knaves puzzle from Labyrinth or other sources, you will find this an enjoyable read. Kaylen’s discussion of the puzzle shows that a robust solution requires information hiding and the capacity for higher-order questioning.
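Wheeler’s encoding uses first-order logic (and clojure.core.logic on the Clojure side), but the flavor of a Knights and Knaves puzzle comes through even in a brute-force Python sketch. This models the classic puzzle where A says “We are both knaves”:

```python
from itertools import product

# Knights always tell the truth; knaves always lie. True = knight.
def solve():
    """Enumerate all type assignments for A and B and keep those
    consistent with A's claim: 'We are both knaves.'"""
    solutions = []
    for a, b in product([True, False], repeat=2):
        statement = (not a) and (not b)   # "we are both knaves"
        if statement == a:                # claim's truth matches A's nature
            solutions.append((a, b))
    return solutions
```

The unique solution has A a knave and B a knight: a knight could not truthfully call himself a knave, and a lying knave makes the statement false only if B is a knight.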

Clojure fans will appreciate the use of clojure.core.logic.

Enjoy!

I first saw this in a tweet by David Nolen.

### A first-person engine in 265 lines

Tuesday, June 3rd, 2014

A first-person engine in 265 lines

From the post:

Today, let’s drop into a world you can reach out and touch. In this article, we’ll compose a first-person exploration from scratch, quickly and without difficult math, using a technique called raycasting. You may have seen it before in games like Daggerfall and Duke Nukem 3D, or more recently in Notch Persson’s ludum dare entries. If it’s good enough for Notch, it’s good enough for me!
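The article’s engine computes exact grid intersections (DDA) plus fisheye correction; the sketch below is a much cruder fixed-step ray march over a made-up map, just to show the core idea of walking a ray until it hits a wall:

```python
import math

# '#' is a wall, '.' is open floor. Map and positions are my own toy data.
MAP = ["#####",
       "#...#",
       "#.#.#",
       "#...#",
       "#####"]

def cast_ray(x, y, angle, step=0.01, max_dist=10.0):
    """March a ray from (x, y) at `angle` in small steps; return the
    distance to the first wall cell (or max_dist if none is hit)."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        cx, cy = int(x + dx * dist), int(y + dy * dist)
        if MAP[cy][cx] == "#":
            return dist
        dist += step
    return max_dist

d = cast_ray(1.5, 1.5, 0.0)   # looking along +x from inside the room
```

A renderer does one such cast per screen column and draws a wall slice whose height is inversely proportional to the returned distance.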

Not a short exercise but I like the idea of quick to develop interfaces.

Do you know whether, in practice, quick development like this makes it easier to change or discard interfaces?

Thanks!

I first saw this in a tweet by Hunter Loftis.

### Game development in Clojure (with play-clj)

Monday, May 19th, 2014

The tutorial uses Light Table, so you will be getting an introduction to Light Table as well.

If you think about it, enterprise searches are very much treasure hunt adventures with poor graphics and no avatars. 😉