Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

February 14, 2018

Phaser (Game/Training Framework)

Filed under: Education,Games — Patrick Durusau @ 11:13 am

Their graphic, certainly not mine!

From the webpage:

Desktop and Mobile HTML5 game framework. A fast, free and fun open source framework for Canvas and WebGL powered browser games.

Details: Phaser

Do you use games for learning?

For example, almost everyone recognizes the moral lepers in Congress, face on with a TV caption.

But how many of us could perform the same feat in a busy airport or in poor light?

Enter game learning/training!

Photos are easy enough to find and with Gimp you can create partially obscured faces.

Of course, points should be deducted for “recognizing” the wrong face or failing to recognize a “correct” face.

Game action after the point of recognition is up to you. Make it enjoyable if not addictive.

Ping me with your political action games, patrick@durusau.net. No prizes but if I see a particularly clever or enjoyable one, I’ll give a shout out to it.

February 3, 2018

Where Are Topic Mappers Today? Lars Marius Garshol

Filed under: Games,PageRank — Patrick Durusau @ 11:37 am

Some are creating new children’s games:

If you’re interested, Ian Rogers has a complete explanation with examples at: The Google Pagerank Algorithm and How It Works or a different take with a table of approximate results at: RITE Wiki: Page Rank.

Unfortunately, both Garshol and Wikipedia’s PageRank page get the Google pagerank algorithm incorrect.

The correct formulation reads:

The results of reported algorithm are divided by U.S. Government Interference, an unknown quantity.

Perhaps that is why Google keeps its pagerank calculation secret. If I were an allegedly sovereign nation, I would keep Google’s lapdog relationship to the U.S. government firmly in mind.

October 13, 2017

A cRyptic crossword with an R twist

Filed under: Games,Humor,R — Patrick Durusau @ 3:14 pm

A cRyptic crossword with an R twist

From the post:

Last week’s R-themed crossword from R-Ladies DC was popular, so here’s another R-related crossword, this time by Barry Rowlingson and published on page 39 of the June 2003 issue of R-news (now known as the R Journal). Unlike the last crossword, this one follows the conventions of a British cryptic crossword: the grid is symmetrical, and eschews 4×4 blocks of white or black squares. Most importantly, the clues are in the cryptic style: rather than being a direct definition, cryptic clues pair wordplay (homonyms, anagrams, etc) with a hidden definition. (Wikipedia has a good introduction to the types of clues you’re likely to find.) Cryptic crosswords can be frustrating for the uninitiated, but are fun and rewarding once you get to into it.

In fact, if you’re unfamiliar with cryptic crosswords, this one is a great place to start. Not only are many (but not all) of the answers related in some way to R, Barry has helpfully provided the answers along with an explanation of how the cryptic clue was formed. There’s no shame in peeking, at least for a few, to help you get your legs with the cryptic style.

Another R crossword for your weekend enjoyment!

Enjoy!

September 19, 2017

Build a working game of Tetris in Conway’s Game of Life (brain candy)

Filed under: Cellular Automata,Game of Life,Games — Patrick Durusau @ 7:34 pm

Build a working game of Tetris in Conway’s Game of Life

From the webpage:

In Conway’s Game of Life, there exist constructs such as the metapixel which allow the Game of Life to simulate any other Game-of-Life rule system as well. In addition, it is known that the Game of Life is Turing-complete.

Your task is to build a cellular automaton using the rules of Conway’s game of life that will allow for the playing of a game of Tetris.

Your program will receive input by manually changing the state of the automaton at a specific generation to represent an interrupt (e.g. moving a piece left or right, dropping it, rotating it, or randomly generating a new piece to place onto the grid), counting a specific number of generations as waiting time, and displaying the result somewhere on the automaton. The displayed result must visibly resemble an actual Tetris grid.

Your program will be scored on the following things, in order (with lower criteria acting as tiebreakers for higher criteria):

  • Bounding box size — the rectangular box with the smallest area that completely contains the given solution wins.
  • Smaller changes to input — the fewest cells (for the worst case in your automaton) that need to be manually adjusted for an interrupt wins.
  • Fastest execution — the fewest generations to advance one tick in the simulation wins.
  • Initial live cell count — smaller count wins.
  • First to post — earlier post wins.

A challenge that resulted in one and one-half years of effort by an array of participants to create an answer.

Very deep and patient thinking here.

Good training for the efforts that will defeat both government security forces and DRM on the web.

September 18, 2017

Game of Thrones, Murder Network Analysis

Filed under: Games,Graphs,Networks,Social Graphs,Social Networks,Visualization — Patrick Durusau @ 1:03 pm

Game of Thrones, Murder Network Analysis by George McIntire.

From the post:

Everybody’s favorite show about bloody power struggles and dragons, Game of Thrones, is back for its seventh season. And since we’re such big GoT fans here, we just had to do a project on analyzing data from the hit HBO show. You might not expect it, but the show is rife with data and has been the subject of various data projects from data scientists, who we all know love to combine their data powers with the hobbies and interests.

Milan Janosov of the Central European University devised a machine learning algorithm to predict the death of certain characters. A handy tool, for any fan tired of being surprised by the shock murders of the show. Dr. Allen Downey, author of the popular ThinkStats textbooks conducted a Bayesian analysis of the characters’ survival rate in the show. Data Scientist and biologist Shirin Glander applied social network analysis tools to analyze and visualize the family and house relationships of the characters.

The project we did is quite similar to that of Glander’s, we’ll be playing around with network analysis, but with data on the murderers and their victims. We constructed a giant network that maps out every murder of character’s with minor, recurring, and major roles.

The data comes courtesy of Ændrew Rininsland of The Financial Times, who’s done a great of collecting, cleaning, and formatting the data. For the purposes of this project, I had to do a whole lot of wrangling and cleaning of my own and in addition to my subjective decisions about which characters to include as well and what constitutes a murder. My finalized dataset produced a total of of 240 murders from 79 killers. For my network graph, the data produced a total of 225 nodes and 173 edges.

I prefer the Game of Thrones (GoT) books over the TV series. The text exercises a reader’s imagination in ways that aren’t matched by visual media.

That said, the TV series murder data set (Ændrew Rininsland of The Financial Times) is a great resource to demonstrate the power of network analysis.

After some searching, it appears that sometime in 2018 is the earliest date for the next volume in the GoT series. Sorry.

September 5, 2017

Chess Captcha (always legal moves?)

Filed under: Games,Security — Patrick Durusau @ 7:00 pm

I saw this on Twitter. Other games you would use for a captcha?

Graham Cluley says chess captchas aren’t hard to defeat in: Chess CAPTCHA – a serious defence against spammers?

But Cluley, like most users, is assuming a chess captcha has a chess legal solution.

What if the solution is an illegal move? Or more than one illegal move?

An illegal move would put the captcha beyond any standard chess program.

Yes?

Reserving access to those told of the solution.

June 30, 2017

Wikipedia: The Text Adventure

Filed under: Games,Wikipedia — Patrick Durusau @ 2:52 pm

Wikipedia: The Text Adventure by Kevan Davis.

You can choose a starting location or enter any Wikipedia article as your starting point. Described by Davis as interactive non-fiction.

Interesting way to learn an area but the graphics leave much to be desired.

If you like games, see http://kevan.org/ for a number of games designed by Davis.

Paparazzi, for example, with some modifications, could be adapted to being a data mining exercise with public data feeds. “Public” in the sense the data feeds are being obtained from public cameras.

Images from congressional hearings for example. All of the people in those images, aside from the members of congress and the witness, have identities and possibly access to information of interest to you. The same is true for people observed going to and from federal or state government offices.

Crowd-sourcing identification of people in such images, assuming you have pre-clustered them by image similarity, could make government staff and visitors more transparent than they are at present.

Enjoy the Wikipedia text adventure and mine the list of games for ideas on building data-related games.

December 6, 2016

Writing a Halite Bot in Clojure [Incomplete Knowledge/Deception Bots?]

Filed under: Game Theory,Games,Wargames — Patrick Durusau @ 8:57 pm

Writing a Halite Bot in Clojure by Matt Adereth.

From the post:

Halite is a new AI programming competition that was recently released by Two Sigma and Cornell Tech. It was designed and implemented by two interns at Two Sigma and was run as the annual internal summer programming competition.

While the rules are relatively simple, it proved to be a surprisingly deep challenge. It’s played on a 2D grid and a typical game looks like this:

halite-game-460

Each turn, all players simultaneously issue movement commands to each of their pieces:

  1. Move to an adjacent location and capture it if you are stronger than what’s currently there.
  2. Stay put and build strength based on the production value of your current location.

When two players’ pieces are adjacent to each other, they automatically fight. A much more detailed description is available on the Halite Game Rules page.

Looking at the rules page, it looks like:

  • Bots have accurate knowledge of all positions and values.
  • Deception of bots isn’t permitted.
  • Interesting from a bot programming perspective but lack of knowledge and the ever present danger of deception are integral parts of human games.

    Any bot games that feature both a lack of knowledge and/or deception?

November 22, 2016

Geek Jeopardy – Display Random Man Page

Filed under: Games,Humor — Patrick Durusau @ 2:52 pm

While writing up Julia Evans’ Things to learn about Linux, I thought it would be cool to display random man pages.

Which resulted in this one-liner in an executable file (man-random, invoke ./man-random):

man $(ls /usr/share/man/man* | shuf -n1 | cut -d. -f1)

As written, it displays a random page from the directories man1 – man8.

If you replace /man* with /man1/, you will only get results for man1 (the usual default).

All of which made me think of Geek Jeopardy!

Can you name this commands from their first paragraph descriptions? (omit their names)

  • remove sections from each line of files
  • pattern scanning and processing language
  • stream editor for filtering and transforming text
  • generate random permutations
  • filter reverse line feeds from input
  • dump files in octal and other formats

Looks easy now, but after a few glasses of holiday cheer? With spectators? Ready to try another man page section?

Enjoy!

Solution:

  • cut: remove sections from each line of files
  • awk: pattern scanning and processing language
  • sed: stream editor for filtering and transforming text
  • shuf: generate random permutations
  • col: filter reverse line feeds from input
  • od: dump files in octal and other formats

PS: I changed the wildcard in the fourth suggested solution from “?” to “*” to arrive at my solution. (Ubuntu 14.04)

November 13, 2016

Orwell: The surveillance game that puts you in Big Brother’s shoes [Echoes of Enders Game?]

Filed under: Cybersecurity,Games,Privacy — Patrick Durusau @ 8:40 pm

Orwell: The surveillance game that puts you in Big Brother’s shoes by Claire Reilly.

From the post:

“Big Brother has arrived — and it’s you.”

As CNET’s resident privacy nark, I didn’t need much convincing to play a game all about social engineering and online surveillance.

But when I stepped into my role as a new recruit for the fictional Orwell internet surveillance program, I didn’t expect to find the rush of power so beguiling, or unsettling.

Developed by German outfit Osmotic Studios, Orwell sees you working as a new recruit in a surveillance agency of the same name, following a series of terrorist attacks in Bonton, the fictional capital of The Nation. As an agent, you are responsible for scraping social media feeds, blogs, news sites and the private communications of the Nation’s citizens to find those with connections to the bombings.

You start with your first suspect before working through a web of friends and associates. You’re after data chunks — highlighted pieces of information and text found in news stories, websites and blogs that can be dragged and uploaded into the Orwell system and permanently stored as evidence.

The whole game has a kind of polygon graphic aesthetic, making the news clippings, websites and social media feeds you’re trawling feel close to the real thing. But as with everything in Orwell, it’s viewed through a glass, darkly.

If you are a game player, this sounds wickedly seductive.

If your not, what if someone weaponized Orwell so that what appear to be “in the game” hacks are hacks in the “real world?”

A cybersecurity “Enders Game” where the identity of targets and consequences of attacks are concealed from hackers?

Are the identity of targets or consequences of attacks your concern? Or is credit for breaching defenses and looting data enough?

Before reaching that level of simulation, imagine changing from the lone/small group hacker model to a more distributed model.

Where anonymous hackers offer specialized skills, data or software in collaboration on proposed hacks.

Ideas on the requirements for such a collaborative system?

Assuming nation states get together on cybersecurity, it could be a mechanism to match or even out perform such efforts.

August 1, 2016

Torturing Iraqi Prisoners – Roles for Heroes like Warrant Officer Hugh Thompson?

Filed under: Games,Politics,Video — Patrick Durusau @ 2:53 pm

Kaveh Waddell pens a troubling story in A Video Game That Lets You Torture Iraqi Prisoners, which reads in part:


What if there were a way to make sense of state-sanctioned torture in a more visceral way than by reading a news article or watching a documentary? Two years ago, that’s exactly what a team of Pittsburgh-based video-game designers set out to create: an experience that would bring people uncomfortably close to the abuses that took place in one particularly infamous prison camp.

In the game, which is still in development, players assume the role of an American service member stationed at Camp Bucca, a detention center that was located near the port city of Umm Qasr in southeast Iraq, at an undetermined time during the Iraq War. Throughout the game, players interact with Iraqi prisoners, who are clothed in the camp’s trademark yellow jumpsuits and occasionally have black hoods pulled over their heads. The player must interrogate the prisoners, choosing between methods like waterboarding or electrocution to extract information. If an interrogation goes too far, the questioner can kill the prisoner.

Players also have to move captives around the prison camp, arranging them in cell blocks throughout the area. Camp Bucca is best known for incubating the group of fighters who would go on to create ISIS: The group’s leader, Abu Bakr al-Baghdadi, was held there for five years, where he likely forged many of the connections that make up the group’s network today. The developers say they chose to have the player wrestle with cell assignments to underscore the role of American prison camps in radicalizing the next generation of fighters and terrorists.

The developers relied on allegations of prisoner abuse in archived news articles and a leaked Red Cross report to guide their game design. While there were many reports of prisoner abuse at Camp Bucca, they were never so widespread as to prompt an official public investigation.

I find the hope that the game will convey:

“the firsthand revulsion of being in the position of torturer.”

unrealistic in light of the literature on Stanley Milgram’s electric-shock studies.

In the early 1960’s Milgram conducted a psychology experiment where test subjects (who were actors and not harmed) could be shocked by student volunteers, under the supervision of an experimenter. The shocks went all the way to 450 volts and a full 65% of the volunteers when all the way to 450 with the test subject screaming in pain.

Needless to say, the literature on that experiment has spanned decades, including re-enactments, some of which includes:

Rethinking One of Psychology’s Most Infamous Experiments by Cari Romm.

The Game of Death: France’s Shocking TV Experiment by Bruce Crumley.

Original materials:

Obedience to Authority in the Archive

From the webpage:

Stanley Milgram, whose papers are held in Manuscripts and Archives, conducted the Obedience to Authority experiments while he was an assistant professor at Yale University from 1961 to 1963. Milgram found that most ordinary people obeyed instructions to give what they believed to be potentially fatal shocks to innocent victims when told to do so by an authority figure. His 1963 article[i] on the initial findings and a subsequent book, Obedience to Authority and Experimental View (1974), and film, Obedience (1969), catapulted Milgram to celebrity status and made his findings and the experiments themselves the focus of intense ethical debates.[ii] Fifty years later the debates continues.

The Yale University Library acquired the Stanley Milgram Papers from Alexandra Milgram, his widow, in July 1985, less than a year after Milgram’s death. Requests for access started coming in soon after. The collection remained closed to research for several years until processed by archivist Diane Kaplan. In addition to the correspondence, writings, subject files, and teaching files often found in the papers of academics, the collection also contains the data files for Milgram’s experiments, including administrative records, notebooks, files on experimental subjects, and audio recordings of experimental sessions, debriefing sessions, and post-experiment interviews.

The only redeeming aspect of the experiment and real life situations like My Lai, is that not everyone is willing to tolerate or commit outrageous acts.

Hopeful the game will include roles for people like Warrant Officer Hugh Thompson who ended the massacre at My Lai by interposing his helicopter between American troops and retreating villagers and turned his weapons on the American troops.

Would you pull your weapon on a fellow member of the service to stop torturing of an Iraqi prisoner?

Would you use your weapon on a fellow member of the service to stop torturing of an Iraqi prisoner?

Would you?

Survey says: At least 65% of you would not.

March 27, 2016

#AlphaGo Style Monte Carlo Tree Search In Python

Filed under: Artificial Intelligence,Games,Monte Carlo,Searching — Patrick Durusau @ 6:13 pm

Raymond Hettinger (@raymondh) tweeted the following links for anyone who wants an #AlphaGo style Monte Carlo Tree Search in Python:

Introduction to Monte Carlo Tree Search by Jeff Bradberry.

Monte Carlo Tree Search by Cameron Browne.

Jeff’s post is your guide to Monte Carlo Tree Search in Python while Cameron’s site bills itself as:

This site is intended to provide a comprehensive reference point for online MCTS material, to aid researchers in the field.

I didn’t see any dated later than 2010 on Cameron’s site.

Suggestions for other collections of MCTS material that are more up to date?

March 24, 2016

AI Masters Go, Twitter, Not So Much (Log from @TayandYou?)

Filed under: Artificial Intelligence,Games,Machine Learning,Twitter — Patrick Durusau @ 8:30 pm

Microsoft deletes ‘teen girl’ AI after it became a Hitler-loving sex robot within 24 hours by Helena Horton.

From the post:

A day after Microsoft introduced an innocent Artificial Intelligence chat robot to Twitter it has had to delete it after it transformed into an evil Hitler-loving, incestual sex-promoting, ‘Bush did 9/11’-proclaiming robot.

Developers at Microsoft created ‘Tay’, an AI modelled to speak ‘like a teen girl’, in order to improve the customer service on their voice recognition software. They marketed her as ‘The AI with zero chill’ – and that she certainly is.

The headline was suggested to me by a tweet from Peter Seibel:

Interesting how wide the gap is between two recent AI: AlphaGo and TayTweets. The Turing Test is *hard*. http://gigamonkeys.com/turing/.

In preparation for the next AI celebration, does anyone have a complete log of the tweets from Tay Tweets?

I prefer non-revisionist history where data doesn’t disappear. You can imagine the use Stalin would have made of that capability.

March 14, 2016

Project AIX: Using Minecraft to build more intelligent technology

Filed under: Artificial Intelligence,Games,Machine Learning — Patrick Durusau @ 2:16 pm

Project AIX: Using Minecraft to build more intelligent technology by Allison Linn.

From the post:

In the airy, loft-like Microsoft Research lab in New York City, five computer scientists are spending their days trying to get a Minecraft character to climb a hill.

That may seem like a pretty simple job for some of the brightest minds in the field, until you consider this: The team is trying to train an artificial intelligence agent to learn how to do things like climb to the highest point in the virtual world, using the same types of resources a human has when she learns a new task.

That means that the agent starts out knowing nothing at all about its environment or even what it is supposed to accomplish. It needs to understand its surroundings and figure out what’s important – going uphill – and what isn’t, such as whether it’s light or dark. It needs to endure a lot of trial and error, including regularly falling into rivers and lava pits. And it needs to understand – via incremental rewards – when it has achieved all or part of its goal.

“We’re trying to program it to learn, as opposed to programming it to accomplish specific tasks,” said Fernando Diaz, a senior researcher in the New York lab and one of the people working on the project.

The research project is possible thanks to AIX, a platform developed by Katja Hofmann and her colleagues in Microsoft’s Cambridge, UK, lab and unveiled publicly on Monday. AIX allows computer scientists to use the world of Minecraft as a testing ground for conducting research designed to improve artificial intelligence.

The project is in closed beta now but said to be going open source in the summer of 2016.

Someone mentioned quite recently the state of documentation on Minecraft. Their impression was there is a lot of information but poorly organized.

If you are interested in exploring Minecraft for the release this summer, see: How to Install Minecraft on Ubuntu or Any Other Linux Distribution.

PyGame, Pong and Tensorflow

Filed under: Games,Machine Learning — Patrick Durusau @ 10:50 am

Daniel Slater has a couple of posts of interest to AI game followers:

How to run learning agents against PyGame

Deep-Q learning Pong with Tensorflow and PyGame

If you like low-end video games… 😉

Seriously, the principles here can be applied to more complex situations and video games.

Enjoy!

Lee Sedol “busted up” AlphaGo – Game 4

Filed under: Artificial Intelligence,Games,Machine Learning — Patrick Durusau @ 9:43 am

Lee Sedol defeats AlphaGo in masterful comeback – Game 4 by David Ormerod.

From the post:

Expectations were modest on Sunday, as Lee Sedol 9p faced the computer Go program AlphaGo for the fourth time.

Lee Sedol 9 dan, obviously relieved to win his first game.

After Lee lost the first three games, his chance of winning the five game match had evaporated.

His revised goal, and the hope of millions of his fans, was that he might succeed in winning at least one game against the machine before the match concluded.

However, his prospects of doing so appeared to be bleak, until suddenly, just when all seemed to be lost, he pulled a rabbit out of a hat.

And he didn’t even have a hat!

Lee Sedol won game four by resignation.

A reversal of roles but would you say that Sedol “busted up” AlphaGo?

Looking forward to the results of Game 5!

February 28, 2016

Writing Games with Emacs

Filed under: Emacs,Games,Lisp — Patrick Durusau @ 8:44 pm

Writing Games with Emacs by Zachary Kanfer. (video)

From the description:

Games are a great way to get started writing programs in any language. In Emacs Lisp, they’re even better—you use the same exact techniques to extend Emacs, configuring it to do what you want. In this presentation, Zachary Kanfer livecodes tic-tac-toe. You’ll see how to create a basic major mode, make functions, store state, and set keybindings.

You can grab the source code at: zck.me.

Ready to build some muscle memory?

February 11, 2016

The Hitchhiker’s Guide to the Galaxy Game – 30th Anniversary Edition

Filed under: Games,Humor — Patrick Durusau @ 8:44 pm

The Hitchhiker’s Guide to the Galaxy Game – 30th Anniversary Edition

From the webpage:

A word of warning

The game will kill you frequently. It’s a bit mean like that.

If in doubt, before you make a move, please save your game by typing “Save” then enter. You can then restore your game by typing “Restore” then enter. This should make it slightly less annoying getting killed all the time as you can go back to where you were before it happened.

You’ll need to be signed in for this to work. You can sign in or register by clicking the BBCiD icon next to the BBC logo in the top navigation bar. Signing in will also allow you to tweet about your achievements, and to add a display name so you can get onto the high score tables.

Take fair warning, you can lose hours if not days playing this game.

The graphics may help orient yourself in the various locations. That was missing in the original game.

If you maintain focus on the screen, you can use your keyboard for data entry.

Graphics are way better now but how do you compare the game play to current games?

Enjoy!

More details:

About the game

Game Technical FAQ

How to play

February 3, 2016

Cheating Cheaters [Honeypots for Government Agencies?]

Filed under: Cybersecurity,Games — Patrick Durusau @ 4:36 pm

Video Game Cheaters Outed By Logic Bombs by timothy.

From the post:

A Reddit user decided to tackle the issue of cheaters within Valve’s multiplayer shooter Counter Strike: Global Offensive in their own unique way: by luring them towards fake “multihacks” that promised a motherlode of cheating tools, but in reality, were actually traps designed to cause the users who installed them to eventually receive bans. The first two were designed as time bombs, which activated functions designed to trigger bans after a specific time of day. The third, which was downloaded over 3,500 times, caused instantaneous bans.

I wonder if anyone is running honeypots for intelligence agencies?

Or fake jihad sites for our friends in law enforcement?

Sort of a Spy vs. Spy situation, yes?

spion-mot-spion

Cyber-dueling with government before you aren’t wearing protective gear and the tips aren’t blunted.

December 4, 2015

[A] Game-Changing Go Engine (Losing mastery over computers? Hardly.)

Filed under: Artificial Intelligence,Games — Patrick Durusau @ 4:22 pm

How Facebook’s AI Researchers Built a Game-Changing Go Engine

From the post:

One of the last bastions of human mastery over computers is the game of Go—the best human players beat the best Go engines with ease.

That’s largely because of the way Go engines work. These machines search through all possible moves to find the strongest.

While this brute force approach works well in draughts and chess, it does not work well in Go because of the sheer number of possible positions on a board. In draughts, the number of board positions is around 10^20; in chess it is 10^60.

But in Go it is 10^100—that’s significantly more than the number of particles in the universe. Searching through all these is unfeasible even for the most powerful computers.

So in recent years, computer scientists have begun to explore a different approach. Their idea is to find the most powerful next move using a neural network to evaluate the board. That gets around the problem of searching. However, neural networks have yet to match the level of good amateur players or even the best search-based Go engines.

Today, that changes thanks to the work of Yuandong Tian at Facebook AI Research in Menlo Park and Yan Zhu at Rutgers University in New Jersey. These guys have combined a powerful neural network approach with a search-based machine to create a Go engine that plays at an impressively advanced level and has room to improve.

The new approach is based in large part on advances that have been made in neural network-based machine learning in just the last year or two. This is the result of a better understanding of how neural networks work and the availability of larger and better databases to train them.

This is how Tian and Zhu begin. They start with a database of some 250,000 real Go games. They used 220,000 of these as a training database. They used the rest to test the neural network’s ability to predict the next moves that were played in real games.

If you want the full details, check out:

Better Computer Go Player with Neural Network and Long-term Prediction by Yuandong Tian, Yan Zhu.

Abstract:

Competing with top human players in the ancient game of Go has been a long-term goal of artificial intelligence. Go’s high branching factor makes traditional search techniques ineffective, even on leading-edge hardware, and Go’s evaluation function could change drastically with one stone change. Recent works [Maddison et al. (2015); Clark & Storkey (2015)] show that search is not strictly necessary for machine Go players. A pure pattern-matching approach, based on a Deep Convolutional Neural Network (DCNN) that predicts the next move, can perform as well as Monte Carlo Tree Search (MCTS)-based open source Go engines such as Pachi [Baudis & Gailly (2012)] if its search budget is limited. We extend this idea in our bot named darkforest, which relies on a DCNN designed for long-term predictions. Darkforest substantially improves the win rate for pattern-matching approaches against MCTS-based approaches, even with looser search budgets. Against human players, darkforest achieves a stable 1d-2d level on KGS Go Server, estimated from free games against human players. This substantially improves the estimated rankings reported in Clark & Storkey (2015), where DCNN-based bots are estimated at 4k-5k level based on performance against other machine players. Adding MCTS to darkforest creates a much stronger player: with only 1000 rollouts, darkforest+MCTS beats pure darkforest 90% of the time; with 5000 rollouts, our best model plus MCTS beats Pachi with 10,000 rollouts 95.5% of the time.

The author closes with this summary:

This kind of research is still in its early stages, so improvements are likely in the near future. It may be that humans are about to lose their mastery over computers in yet another area.

I may have to go read the article again because the program as described:

  • Did not invent the game of Go or any of its rules.
  • Did not play any of the 220,000 actual Go games used for training.

That is to say, the game of Go was invented by people, and people playing Go supplied the basis for this Go-playing computer.

Not to take anything away from the program or these researchers, but humans are hardly about to lose “…mastery over computers in yet another area.”

Humans remain the creators of such games, the source of training data and the measure against which the computer measures itself.

Who do you think is master in such a relationship?*

* Modulo that the DHS wants to make answers from computers to be the basis for violating your civil liberties. But that’s a different type of “mastery” issue.

November 13, 2015

Bytes that Rock! Software Awards 2015 (Nominations Open Now – Close 16th November 2015)

Filed under: Blogs,Contest,Games,Software — Patrick Durusau @ 2:38 pm

Bytes that Rock! Software Awards 2015 (Nominations Open Now – Close 16th November 2015)

An awards program for excellence in software and blogs!

The only limitation I could find is:

Bytes that Rock recognizes the best software and blogs for their excellence in the past 12 months.

Your game/software/blog may have been excellent three (3) years ago but that doesn’t count. 😉

Subject to that mild limitation, step up and:

Submit a blog, software or game by clicking on the categories below!

Software blogs
VideoGame blogs
Security blogs

PC Software
Software UI
Innovative Software
Protection Software
Open Source Software

PC Games
Indie Games
Mods for games

This is not a next week, or after I ask X, or when I get home task.

This is a hit a submit link now task!

You will feel better after having made a nomination. Promise. 😉

[Image: Bytes that Rock! awards banner]

November 2, 2015

Visualizing Chess Data With ggplot

Filed under: Games,Ggplot2,R,Visualization — Patrick Durusau @ 11:33 am

Visualizing Chess Data With ggplot by Joshua Kunst.

Sales of traditional chess sets peak during the holiday season. The following graphic does not include sales of chess gadgets, chess software, or chess computers:

[Image: Terapeak weekly dollar-volume trend for tabletop games on eBay]

(Source: Terapeak Trends: Which Tabletop Games Sell Best on eBay? by Aron Hsiao.)

Joshua’s post is a guide to using and visualizing chess data under the following topics:

  1. The Data
  2. Piece Movements
  3. Survival rates
  4. Square usage by player
  5. Distributions for the first movement
  6. Who captures whom
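Joshua's post works in R with ggplot, but the core of topic 4 (square usage) is just counting destination squares and laying the counts out as a board. A rough Python analogue, with toy moves in coordinate form (not a real PGN parser):

```python
from collections import Counter

def square_usage(moves):
    """Count how often each destination square is used, given moves
    in coordinate form like 'e2e4' (toy data, purely illustrative)."""
    counts = Counter(m[2:4] for m in moves)
    # Lay the counts out as an 8x8 grid, rank 8 at the top,
    # the way a board diagram is usually drawn.
    files, ranks = "abcdefgh", "87654321"
    return [[counts[f + r] for f in files] for r in ranks]

moves = ["e2e4", "e7e5", "g1f3", "b8c6", "f1c4", "g8f6", "f3e5"]
grid = square_usage(moves)
print(grid[4][4])  # uses of e4 -> 1
print(grid[3][4])  # uses of e5 -> 2
```

Feed a grid like this to any heatmap routine (geom_tile in ggplot, for instance) and you have the square-usage plot.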

Joshua is using public chess data but it’s just a short step to using data from your own chess games or those of friends from your local chess club. 😉

Visualize the play of openings, defenses, players + openings/defenses, you are limited only by your imagination.

Give a chess friend a visualization they can’t buy in any store!

PS: Check out rchess, a chess package for R, also by Joshua Kunst.

I first saw this in a tweet by Christophe Lalanne.

August 21, 2015

Parens of the Dead

Filed under: Clojure,ClojureScript,Functional Programming,Games,Programming — Patrick Durusau @ 7:43 pm

Parens of the Dead: A screencast series of zombie-themed games written with Clojure and ClojureScript.

Three episodes posted thus far:

Episode 1: Lying in the Ground

Starting with an empty folder, we’ll lay the technical groundwork for our game. We’ll get a Clojure web server up and running, compiling and sending ClojureScript to the browser.

Episode 2: Frontal Assault

In this one, we create most of the front-end code. We take a look at the data structure that describes the game, using that to build up our UI.

Episode 3: What Lies Beneath

The player has only one action available; revealing a tile. We’ll start implementing the central ‘reveal-tile’ function on the backend, writing tests along the way.

Next episode? Follow @parensofthedead

Another innovative instruction technique!

Suggestions:

1) Have your volume control available because I found the sound in the screencasts to be very soft.

2) Be prepared to move very quickly as episode one, for example, is only eleven minutes long.

3) Download the code and walk through it at a slower pace.

Enjoy!

March 6, 2015

Chinese Tradition Inspires Machine Learning Advancements, Product Contributions

Filed under: Games,Machine Learning — Patrick Durusau @ 2:43 pm

Chinese Tradition Inspires Machine Learning Advancements, Product Contributions by George Thomas Jr.

From the post:

A new online Chinese riddle game is steeped in more than just tradition. In fact, the machine learning and artificial intelligence that fuels it derives from years of research that also helps drive Bing Search, Bing Translator, the Translator App for Windows Phone, and other products.

Launched in celebration of the Chinese New Year, Microsoft Chinese Character Riddle is based on the two-player game unique to Chinese traditional culture and part of the Chinese Lantern Festival. Developed by the Natural Language Computing Group in the Microsoft Research's Beijing lab, the game not only quickly returns an answer to a user's riddle, but also works in reverse: when a user enters a single Chinese character as the intended answer, the system generates several riddles from which to choose.

"These innovations typically embody the strategic thought of Natural Language Processing 2.0, which is to collect big data on the Internet, to automatically build AI models using statistical machine learning methods, and to involve users in the innovation process by quickly getting their on-line feedback," says Dr. Ming Zhou, Group Leader for Natural Language Computing Group and Principal Researcher at Microsoft Research Asia. "Thus the riddle system will continue to improve."

I don’t know any Chinese characters at all so others will need to judge the usefulness of this machine learning version. I did find a general resource on Riddles about Chinese Characters.

What other word or riddle games would pose challenges for machine learning?

I first saw this in a tweet by Microsoft Research.

March 1, 2015

Clojure and Overtone Driving Minecraft

Filed under: Clojure,Games,Music — Patrick Durusau @ 4:53 pm

Clojure and Overtone Driving Minecraft by Joseph Wilk.

From the post:

Using Clojure we create interesting 3D shapes in Minecraft to the beat of music generated from Overtone.

We achieve this by embedding a Clojure REPL inside a Java Minecraft server which loads Overtone and connects to an external Supercollider instance (what Overtone uses for sound).

Speaking of functional programming, you may find this useful.

The graphics and music are impressive!

December 30, 2014

The New Chess World Champion

Filed under: Game Theory,Games — Patrick Durusau @ 8:54 pm

The New Chess World Champion by K W Regan.

From the post:

Larry Kaufman is a Grandmaster of chess, and has teamed in the development of two champion computer chess programs, Rybka and Komodo. I have known him from chess tournaments since the 1970s. He earned the title of International Master (IM) from the World Chess Federation in 1980, a year before I did. He earned his GM title in 2008 by dint of winning the World Senior Chess Championship, equal with GM Mihai Suba.

Today we salute Komodo for winning the 7th Thoresen Chess Engines Competition (TCEC), which some regard as the de-facto world computer chess championship.

Part computer chess history, part current state of play, with asides on Shogi, dots-and-boxes, Arimaa, and Go.

Regan forgets to mention that thus far, computers don't compete at all in the game of thrones. No one has succeeded in teaching a computer to lie. That is, knowing the correct answer and, for motives of its own, concealing that answer and offering another.

PS:

Komodo (commercial, $59.96)

Stockfish (open source)

December 26, 2014

How to Win at Rock-Paper-Scissors

Filed under: Game Theory,Games,Science,Social Sciences — Patrick Durusau @ 4:40 pm

How to Win at Rock-Paper-Scissors

From the post:

The first large-scale measurements of the way humans play Rock-Paper-Scissors reveal a hidden pattern of play that opponents can exploit to gain a vital edge.

[Image: Rock-Paper-Scissors diagram]

If you’ve ever played Rock-Paper-Scissors, you’ll have wondered about the strategy that is most likely to beat your opponent. And you’re not alone. Game theorists have long puzzled over this and other similar games in the hope of finding the ultimate approach.

It turns out that the best strategy is to choose your weapon at random. Over the long run, that makes it equally likely that you will win, tie, or lose. This is known as the mixed strategy Nash equilibrium in which every player chooses the three actions with equal probability in each round.

And that’s how the game is usually played. Various small-scale experiments that record the way real people play Rock-Paper-Scissors show that this is indeed the strategy that eventually evolves.

Or so game theorists had thought… (emphasis added)
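The mixed-strategy equilibrium described in the excerpt is easy to check by simulation. A toy sketch (not code from the study): two players choosing uniformly at random should win, tie, and lose about a third of the time each.

```python
import random

def simulate_rps(rounds=100_000, seed=42):
    """Two players choosing uniformly at random -- the mixed-strategy
    Nash equilibrium. Returns (win, tie, loss) fractions for player one."""
    beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
    rng = random.Random(seed)
    options = list(beats)
    wins = ties = 0
    for _ in range(rounds):
        a, b = rng.choice(options), rng.choice(options)
        if a == b:
            ties += 1
        elif beats[a] == b:
            wins += 1
    losses = rounds - wins - ties
    return wins / rounds, ties / rounds, losses / rounds

print(simulate_rps())  # each fraction close to 1/3
```

The interesting part of the paper, of course, is that real humans deviate from this in a systematic, exploitable way.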

No, I’m not going to give away the answer!

I will only say the answer isn’t what has been previously thought.

Why the different answer? Well, the authors speculate (with some justification) that the smallness of prior experiments resulted in the non-exhibition of a data pattern that was quite obvious when done on a larger scale.

Given that N < 100 in so many sociology, psychology, and other social science experiments, the existing literature offers a vast number of opportunities where repeating small experiments on a large scale could produce different results. If you have any friends in a local social science department, you might want to suggest this to them as a way to be on the front end of big data in social science.

PS: If you have access to a social science index, please search and post a rough count of studies with fewer than 100 participants in some subset of social science journals, say since 1970. Thanks!
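The small-N point is easy to demonstrate by simulation (invented numbers, purely illustrative): give one option a modest bias, 36% instead of the fair 33.3%, and it is lost in sampling noise at N = 50 but unmistakable at N = 5,000.

```python
import random
import statistics

def sample_rates(n, experiments=200, p=0.36, seed=7):
    """Observed frequency of a slightly over-played option (true rate
    36% instead of the fair 33.3%) across repeated experiments with
    n participants each. Illustrative toy numbers only."""
    rng = random.Random(seed)
    return [sum(rng.random() < p for _ in range(n)) / n
            for _ in range(experiments)]

small = sample_rates(50)     # typical small-N study
large = sample_rates(5000)   # large-scale replication

# Same bias in both cases, but only the large samples pin it down:
# the small samples scatter widely around one third.
print(round(statistics.stdev(small), 3))
print(round(statistics.stdev(large), 3))
```

The spread of the small-sample estimates is roughly ten times that of the large-sample ones, which is exactly why a pattern "quite obvious" at scale can vanish in an N < 100 experiment.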

December 20, 2014

Teaching Deep Convolutional Neural Networks to Play Go

Filed under: Deep Learning,Games,Machine Learning,Monte Carlo — Patrick Durusau @ 2:38 pm

Teaching Deep Convolutional Neural Networks to Play Go by Christopher Clark and Amos Storkey.

Abstract:

Mastering the game of Go has remained a long standing challenge to the field of AI. Modern computer Go systems rely on processing millions of possible future positions to play well, but intuitively a stronger and more ‘humanlike’ way to play the game would be to rely on pattern recognition abilities rather than brute force computation. Following this sentiment, we train deep convolutional neural networks to play Go by training them to predict the moves made by expert Go players. To solve this problem we introduce a number of novel techniques, including a method of tying weights in the network to ‘hard code’ symmetries that are expected to exist in the target function, and demonstrate in an ablation study they considerably improve performance. Our final networks are able to achieve move prediction accuracies of 41.1% and 44.4% on two different Go datasets, surpassing previous state of the art on this task by significant margins. Additionally, while previous move prediction programs have not yielded strong Go playing programs, we show that the networks trained in this work acquired high levels of skill. Our convolutional neural networks can consistently defeat the well known Go program GNU Go, indicating it is state of the art among programs that do not use Monte Carlo Tree Search. It is also able to win some games against state of the art Go playing program Fuego while using a fraction of the play time. This success at playing Go indicates high level principles of the game were learned.
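The "hard coded" symmetries the abstract mentions are the eight dihedral symmetries of the Go board: four rotations, each with a mirror image. A minimal sketch of generating them, using a small board for illustration (not the paper's weight-tying code, just the symmetry group it exploits):

```python
def board_symmetries(board):
    """Generate the eight dihedral symmetries of a square board
    (rotations and reflections) -- the symmetries the network's
    weight-tying bakes in."""
    def rotate(b):  # 90 degrees clockwise
        return [list(row) for row in zip(*b[::-1])]

    def flip(b):    # horizontal mirror
        return [row[::-1] for row in b]

    out = []
    for _ in range(4):
        out.append(board)
        out.append(flip(board))
        board = rotate(board)
    return out

# A small asymmetric 3x3 example: all eight variants are distinct.
board = [[1, 2, 0],
         [0, 0, 0],
         [0, 0, 0]]
variants = board_symmetries(board)
print(len(variants))                    # 8
print(len({str(v) for v in variants}))  # 8 distinct positions
```

Because a Go position and any of its eight images are strategically identical, tying weights across them means the network does not have to learn the same pattern eight separate times.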

If you are going to pursue the study of Monte Carlo Tree Search for semantic purposes, there isn’t any reason to not enjoy yourself as well. 😉

And following the best efforts in game playing will be educational as well.

I take the efforts at playing Go by computer, as well as those for chess, as indicating how far ahead humans are of AI.

Both of those two-player, complete-knowledge games were mastered long ago by humans. Multi-player games with extended networks of influence and motives, not to mention incomplete information as well, seem securely reserved for human players for the foreseeable future. (I wonder if multi-player scenarios are similar to the multi-body problem in physics? Except with more influences.)

I first saw this in a tweet by Ebenezer Fogus.

Monte-Carlo Tree Search for Multi-Player Games [Semantics as Multi-Player Game]

Filed under: Games,Monte Carlo,Search Trees,Searching,Semantics — Patrick Durusau @ 2:25 pm

Monte-Carlo Tree Search for Multi-Player Games by Joseph Antonius Maria Nijssen.

From the introduction:

The topic of this thesis lies in the area of adversarial search in multi-player zero-sum domains, i.e., search in domains having players with conflicting goals. In order to focus on the issues of searching in this type of domains, we shift our attention to abstract games. These games provide a good test domain for Artificial Intelligence (AI). They offer a pure abstract competition (i.e., comparison), with an exact closed domain (i.e., well-defined rules). The games under investigation have the following two properties. (1) They are too complex to be solved with current means, and (2) the games have characteristics that can be formalized in computer programs. AI research has been quite successful in the field of two-player zero-sum games, such as chess, checkers, and Go. This has been achieved by developing two-player search techniques. However, many games do not belong to the area where these search techniques are unconditionally applicable. Multi-player games are an example of such domains. This thesis focuses on two different categories of multi-player games: (1) deterministic multi-player games with perfect information and (2) multi-player hide-and-seek games. In particular, it investigates how Monte-Carlo Tree Search can be improved for games in these two categories. This technique has achieved impressive results in computer Go, but has also shown to be beneficial in a range of other domains.

This chapter is structured as follows. First, an introduction to games and the role they play in the field of AI is provided in Section 1.1. An overview of different game properties is given in Section 1.2. Next, Section 1.3 defines the notion of multi-player games and discusses the two different categories of multi-player games that are investigated in this thesis. A brief introduction to search techniques for two-player and multi-player games is provided in Section 1.4. Subsequently, Section 1.5 defines the problem statement and four research questions. Finally, an overview of this thesis is provided in Section 1.6.
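For readers new to the technique: the selection step at the heart of Monte-Carlo Tree Search scores each child of a node with UCB1 and descends to the best one. A minimal two-player-style sketch with toy numbers (the multi-player variants the thesis studies are exactly about how this per-player reward bookkeeping must change):

```python
import math

def ucb1(wins, visits, parent_visits, c=1.4):
    """UCB1 score used in MCTS selection: exploitation (average reward)
    plus an exploration bonus. Unvisited nodes score infinity so every
    child is tried at least once."""
    if visits == 0:
        return math.inf
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children, parent_visits):
    """Pick the child maximizing UCB1 -- the step MCTS repeats from the
    root down to a leaf before expanding and simulating (toy data)."""
    return max(children,
               key=lambda ch: ucb1(ch["wins"], ch["visits"], parent_visits))

children = [
    {"name": "a", "wins": 6, "visits": 10},
    {"name": "b", "wins": 3, "visits": 4},
    {"name": "c", "wins": 0, "visits": 0},  # unvisited: selected first
]
print(select_child(children, parent_visits=14)["name"])  # c
```

Note how "b", with fewer visits, outscores "a" despite a similar win rate: the exploration term rewards under-sampled moves, which is what keeps the search from tunneling on early favorites.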

This thesis is great background reading on the use of Monte-Carlo tree search in games. While reading the first chapter, I realized that assigning semantics to a token is an instance of a multi-player game with hidden information. That is, the "semantic" of any token doesn't exist in some Platonic universe but rather is the result of some N number of players who also accept a particular semantic for some given token in a particular context. And we lack knowledge of the semantic and the reasons for it that will be assigned by some N number of players, which may change over time and context.

The semiotic triangle of Ogden and Richards (The Meaning of Meaning):

[Image: Ogden–Richards semiotic triangle]

for any given symbol, represents the view of a single speaker. But as Ogden and Richards note, what is heard by listeners should be represented by multiple semiotic triangles:

Normally, whenever we hear anything said we spring spontaneously to an immediate conclusion, namely, that the speaker is referring to what we should be referring to were we speaking the words ourselves. In some cases this interpretation may be correct; this will prove to be what he has referred to. But in most discussions which attempt greater subtleties than could be handled in a gesture language this will not be so. (The Meaning of Meaning, page 15 of the 1923 edition)

Is RDF/OWL more subtle than can be handled by a gesture language? If you think so, then you have discovered one of the central problems with the Semantic Web and any other universal semantic proposal.

Not that topic maps escape a similar accusation, but with topic maps you can encode additional semiotic triangles in an effort to avoid confusion, at least to the extent of funding and interest. And if you aren’t trying to avoid confusion, you can supply semiotic triangles that reach across understandings to convey additional information.

You can’t avoid confusion altogether nor can you achieve perfect communication with all listeners. But, for some defined set of confusions or listeners, you can do more than simply repeat your original statements in a louder voice.

Whether Monte-Carlo Tree searches will help deal with the multi-player nature of semantics isn’t clear but it is an alternative to repeating “…if everyone would use the same (my) system, the world would be better off…” ad nauseam.

I first saw this in a tweet by Ebenezer Fogus.

December 15, 2014

Deep learning for… chess

Filed under: Amazon Web Services AWS,Deep Learning,Games,GPU — Patrick Durusau @ 5:38 am

Deep learning for… chess by Erik Bernhardsson.

From the post:

I’ve been meaning to learn Theano for a while and I’ve also wanted to build a chess AI at some point. So why not combine the two? That’s what I thought, and I ended up spending way too much time on it. I actually built most of this back in September but not until Thanksgiving did I have the time to write a blog post about it.

Chess sets are a common holiday gift so why not do something different this year?

Pretty print a copy of this post and include a gift certificate from AWS for a GPU instance for, say, a week to ten days.

I don’t think AWS sells gift certificates, but they certainly should. Great stocking stuffer, anniversary/birthday/graduation present, etc. Not so great for Valentine’s Day.

If you ask AWS for a gift certificate, mention my name. They don’t know who I am so I could use the publicity. 😉

I first saw this in a tweet by Onepaperperday.
