Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

November 4, 2014

Tabletop Whale’s guide to making GIFs

Filed under: Graphics,Visualization — Patrick Durusau @ 4:16 pm

Tabletop Whale’s guide to making GIFs by Eleanor Lutz.

From the post:

Recently I’ve been getting a lot of emails asking for a tutorial on how to make animations. So this week I put together a quick explanation for anyone who’s interested. I archived it as a link on the menu bar of my website, so it’ll always be easy to find if you need it.

This is just a run-through of my own personal animation workflow, so it’s not a definitive guide or anything. There are plenty of other ways to make animations in Photoshop and other programs.

I’ve never tried making a tutorial about my own work before, so sorry in advance if it’s confusing! Let me know if there’s anything I wrote that didn’t make any sense. I’ll try to fix it if I can (though I probably don’t have room to go into detail about every single Photoshop function I mention).

As you already know, I am graphically challenged but have been trying to improve, with the help of tutorials like this one from Eleanor Lutz.

Most of the steps should transfer directly to Gimp.

If you know of any Gimp-specific tutorials on animation, or ones otherwise useful for information visualization, drop me a line.

Mapping Out Lambda Land: An Introduction to Functional Programming

Filed under: Functional Programming,Programming — Patrick Durusau @ 10:51 am

Mapping Out Lambda Land: An Introduction to Functional Programming by Katie Miller.

From the post:

Anyone who has met me will probably know that I am wildly enthusiastic about functional programming (FP). I co-founded a group for women in FP, have presented a series of talks and workshops about functional concepts, and have even been known to create lambda-branded clothing and jewellery. In this blog post, I will try to give some insight into what the fuss is about. I will briefly explain what functional programming is, why you should care, and how you can use OpenShift to learn more about FP.

Good introduction to functional programming and resources on using OpenShift to learn FP.
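
To give the flavor of the style, here is a toy comparison of my own (not from Katie's post): summing the squares of the even numbers, first imperatively, then functionally, in Python.

    # Toy illustration (mine, not Katie's): sum the squares of the even
    # numbers, first by mutating an accumulator, then by composing pure
    # expressions with no mutation.
    nums = [1, 2, 3, 4, 5, 6]

    total = 0                # imperative: update state step by step
    for n in nums:
        if n % 2 == 0:
            total += n * n

    functional_total = sum(n * n for n in nums if n % 2 == 0)

    assert total == functional_total == 56

The functional version says what is being computed rather than how to update state, which is the shift in thinking introductions like Katie's aim to convey.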

Just in case you don’t recognize the name, Katie is the author of Lingo of Lambda Land, a poem, or more than a poem, depending on who is reading it.

November 3, 2014

neo4apis

Filed under: Graphs,Neo4j,Tweets — Patrick Durusau @ 9:09 pm

neo4apis by Brian Underwood.

From the post:

I’ve been reading a few interesting analyses of Twitter data recently such as this #gamergate analysis by Andy Baio. I thought it would be nice to have a mechanism for people to quickly and easily import data from Twitter to Neo4j for research purposes. Like a good programmer I had to go up at least one level of abstraction. Thus was born the ruby gems neo4apis and neo4apis-twitter (and, incidentally, neo4apis-github just to prove it was repeatable).

Using the neo4apis-twitter gem is easy and can be used either in your ruby code or from the command line. neo4apis takes care of loading your data efficiently as well as creating database indexes so that you can query it effectively.
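
The gems are Ruby, but the import they automate is easy to sketch. Here is a rough Python outline (the names and query shapes are my assumptions, not the gem’s API): create an index up front, then MERGE each tweet and its author with parameterized Cypher.

    # Rough sketch only; neo4apis-twitter is Ruby, and this merely shows
    # the shape of the import it automates.
    def run_cypher(query, params):
        # Placeholder: swap in your Neo4j client's query method here.
        print(query, params)

    def import_tweets(tweets):
        # Index screen names so MERGE lookups stay fast as the graph grows.
        run_cypher("CREATE INDEX ON :User(screen_name)", {})
        for tweet in tweets:
            run_cypher(
                "MERGE (u:User {screen_name: {name}}) "
                "MERGE (t:Tweet {id: {id}}) SET t.text = {text} "
                "MERGE (u)-[:TWEETED]->(t)",
                {"name": tweet["user"], "id": tweet["id"], "text": tweet["text"]},
            )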

In case you haven’t heard, the number of active Twitter users is estimated at 228 million. That is a lot of users, but as I write this post, the world’s population has passed 7,271,955,000.

Just doing rough numbers, 7,271,955,000 / 228,000,000 ≈ 32.

So if you captured a tweet from every active Twitter user, you would be hearing from only about 1/32 of the world’s population.

Not saying you shouldn’t capture tweets or analyze them in Neo4j. I am saying that you should be mindful of the lack of representativeness in such tweets.

Using Apache Spark and Neo4j for Big Data Graph Analytics

Filed under: BigData,Graphs,Hadoop,HDFS,Spark — Patrick Durusau @ 8:29 pm

Using Apache Spark and Neo4j for Big Data Graph Analytics by Kenny Bastani.

From the post:


Fast and scalable analysis of big data has become a critical competitive advantage for companies. There are open source tools like Apache Hadoop and Apache Spark that are providing opportunities for companies to solve these big data problems in a scalable way. Platforms like these have become the foundation of the big data analysis movement.

Still, where does all that data come from? Where does it go when the analysis is done?

Graph databases

I’ve been working with graph database technologies for the last few years and I have yet to become jaded by its powerful ability to combine both the transformation of data with analysis. Graph databases like Neo4j are solving problems that relational databases cannot.

Graph processing at scale from a graph database like Neo4j is a tremendously valuable power.

But if you wanted to run PageRank on a dump of Wikipedia articles in less than 2 hours on a laptop, you’d be hard pressed to be successful. More so, what if you wanted the power of a high-performance transactional database that seamlessly handled graph analysis at this scale?

Mazerunner for Neo4j

Mazerunner is a Neo4j unmanaged extension and distributed graph processing platform that extends Neo4j to do big data graph processing jobs while persisting the results back to Neo4j.

Mazerunner uses a message broker to distribute graph processing jobs to Apache Spark’s GraphX module. When an agent job is dispatched, a subgraph is exported from Neo4j and written to Apache Hadoop HDFS.

Mazerunner is an alpha release with PageRank as its only algorithm.
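
To see what that job computes, here is a toy PageRank by power iteration in Python. This is an illustration of the algorithm only, nothing like the Spark GraphX job Mazerunner dispatches.

    # Toy PageRank by power iteration (illustration only). Assumes every
    # node has at least one outgoing edge.
    def pagerank(edges, damping=0.85, iterations=20):
        nodes = {n for edge in edges for n in edge}
        out_degree = {n: sum(1 for src, _ in edges if src == n) for n in nodes}
        rank = {n: 1.0 / len(nodes) for n in nodes}
        for _ in range(iterations):
            share = {n: 0.0 for n in nodes}
            for src, dst in edges:
                share[dst] += rank[src] / out_degree[src]
            rank = {n: (1 - damping) / len(nodes) + damping * share[n]
                    for n in nodes}
        return rank

    print(pagerank([("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")]))

The point of Mazerunner is that the same computation over a Wikipedia-sized graph will not finish comfortably in one process, so the subgraph is shipped to HDFS and Spark instead.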

It has a great deal of potential, so it is worth your time to investigate further.

Suppressing Authentic Information

Filed under: News,Reporting,Skepticism — Patrick Durusau @ 8:14 pm

In my continuing search for information on the authenticity of Dabiq (see: Dabiq, ISIS and Data Skepticism) I encountered Slick, agile and modern – the IS media machine by Mina Al-Lami.

Mina makes it clear that IS (ISIL/ISIS) has been the target of a campaign to shut down all authentic outlets for news from the group:


IS has always relied heavily on hordes of online supporters to amplify its message. But their role has become increasingly important in recent months as the group’s official presence on a variety of social media platforms has been shut down and moved underground.

The group’s ability to keep getting its message out in the face of intensive counter-measures is due to the agility, resilience and adaptability of this largely decentralized force.

Until July this year, IS, like most jihadist groups, had a very strong presence on Twitter, with all its central and regional media outlets officially active on the platform. However, its military successes on the ground in Iraq and Syria in June triggered a concerted and sustained clampdown on the group’s accounts.

IS was initially quick to replace these accounts, in what became a game of whack-a-mole between IS and the Twitter administration. But by July the group appeared to have abandoned any attempt to maintain an official open presence there.

Instead, IS began experimenting with a string of less known social media platforms. These included the obscure Friendica, Quitter and Diaspora – all of which promise better privacy and data-protection than Twitter – as well as the popular Russian VKontakte.

Underground channels

While accounts on Friendica and Quitter were shut down within days, the official IS presence on Diaspora and VKontakte lasted several weeks before their involvement in the distribution of high profile beheading videos caused them too to be shut down.

Since the accounts on VKontakte were closed in September, IS appears to have resorted to underground channels to surface its material, making no attempt to advertise an official social media presence. Perhaps surprisingly, this has not yet caused any problems for the group in terms of authenticating its output.

Once a message has surfaced – via channels that are currently difficult to pin down – it is disseminated by loosely affiliated media groups who are capable of mobilizing a vast network of individual supporters on social media to target specific audiences.

Unfortunately, Mina misses the irony of reporting in one breath that IS has no authentic outlets, while relying in the next on non-authentic materials (such as Dabiq) to talk about the group’s social media prowess.

Suppression of authentic content outlets for IS leaves an interested reader at the mercy of governments, news organizations and others who have a variety of motives for attributing content to IS.

As I mentioned in my last post:

Debates about international and national policy should not be based on faked evidence (such as “yellow cake uranium“) or faked publications.

I have heard the argument that IS content recruits support for terrorism. I have read propaganda attributed to IS, the Khmer Rouge, the KKK and terrorists sponsored by Western governments. I can report not the slightest interest in supporting or participating in any of them.

The recruitment argument is a variation on the old fear that allowing gays, drug use, drinking, etc., on television would result in children growing up to be gay drug addicts with drinking problems. I can report that no sane person credits that fear today. (If you have that fear, contact your local mental health service for an appointment.)

Why is IS attractive? Hard to say, given the lack of authentic information on its goals and platform. Perhaps its reported opposition to corrupt governments in the Middle East?

If I weren’t concerned with corrupt Western governments I might be more concerned with governments in the Middle East. But, as they say, best to start cleaning your own house before complaining about the state of another’s.

November 2, 2014

Kylin

Filed under: Kylin,OLAP,SQL — Patrick Durusau @ 8:33 pm

Open Source Distributed Analytics Engine with SQL interface and OLAP on Hadoop by eBay – Kylin by Avkash Chauhan.

From the post:

Key Features:

  • Extremely Fast OLAP Engine at Scale:
    • Kylin is designed to reduce query latency on Hadoop for 10+ billion rows of data
  • ANSI-SQL Interface on Hadoop:
    • Kylin offers ANSI-SQL on Hadoop and supports most ANSI-SQL query functions
  • Interactive Query Capability:
    • Users can interact with Hadoop data via Kylin at sub-second latency, better than Hive queries for the same dataset
  • MOLAP Cube:
    • Users can define a data model and pre-build cubes in Kylin from more than 10 billion raw data records
  • Seamless Integration with BI Tools:
    • Kylin currently offers integration capability with BI Tools like Tableau.
  • Other Highlights:
    • Job Management and Monitoring
    • Compression and Encoding Support
    • Incremental Refresh of Cubes
    • Leverage HBase Coprocessor to reduce query latency
    • Approximate Query Capability for Distinct Count (HyperLogLog)
    • Easy Web interface to manage, build, monitor and query cubes
    • Security capability to set ACL at Cube/Project Level
    • Support LDAP Integration

Find it at Github: https://github.com/KylinOLAP/Kylin

Learn more at: http://www.kylin.io/index.html

More info:

Kylin OLAP Group

Kylin Developer Mail

A useful write-up for an overview of Kylin: Announcing Kylin: Extreme OLAP Engine for Big Data, the blog post from eBay that announces the open sourcing of Kylin.

What caught my eye was the pre-calculation of combinations of dimensions using Hadoop. Sounds promising!
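
To make the pre-calculation concrete, here is a toy sketch in Python (my illustration, not Kylin’s code): aggregate a measure for every combination of dimensions up front, so that a query becomes a lookup.

    from itertools import combinations

    # Toy MOLAP-style cube build (my illustration, not Kylin's code):
    # pre-aggregate 'sales' for every subset of the dimensions so a query
    # like (country, year) becomes a dictionary lookup.
    rows = [
        {"country": "US", "year": 2014, "product": "a", "sales": 3},
        {"country": "US", "year": 2014, "product": "b", "sales": 5},
        {"country": "DE", "year": 2013, "product": "a", "sales": 2},
    ]
    dimensions = ("country", "year", "product")

    cube = {}
    for k in range(len(dimensions) + 1):
        for dims in combinations(dimensions, k):
            for row in rows:
                key = (dims, tuple(row[d] for d in dims))
                cube[key] = cube.get(key, 0) + row["sales"]

    # "Query": total US sales in 2014, answered without touching the rows.
    print(cube[(("country", "year"), ("US", 2014))])  # -> 8

The trade-off is cube size: n dimensions yield 2^n cuboids, which is exactly why the pre-computation is farmed out to Hadoop rather than done at query time.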

The Common Lisp Cookbook

Filed under: Lisp,Programming — Patrick Durusau @ 8:02 pm

The Common Lisp Cookbook

From the webpage:

This is a collaborative project that aims to provide for Common Lisp something similar to the Perl Cookbook published by O’Reilly. More details about what it is and what it isn’t can be found in this thread from comp.lang.lisp.

The credit for finally giving birth to the project probably goes to "dj_special_ed" who posted this message to comp.lang.lisp.

If you want to contribute to the CL Cookbook, you can

  • ask one of the project admins to become a registered developer,
  • submit patches via Sourceforge’s patch tracking system,
  • or simply send stuff (corrections, additions, or even whole chapters) by email.

Yes, we’re talking to you! We need contributors – write a chapter that’s missing and add it, find an open question and provide an answer, find bugs and report them, or just send questions and wait for somebody else to answer them. (If you have no idea what might be missing but would like to help, take a look at the table of contents of the Perl Cookbook. [Updated the link to point to 2nd edition, 2003.]) Don’t worry about the formatting, just send plain text if you like – we’ll take care about that later.

Thanks in advance for your help!

The pages here on Sourceforge’s web server should be fairly up-to-date – they’re automatically checked out of the CVS repository once per day. You can also download a nightly CVS tarball for offline browsing. More info, including mailing list(s), can be found at the Sourceforge project page. There’s also a CHANGELOG available.

Is the 2nd edition of the Perl Cookbook really eleven (11) years old? What needs to be added to its table of contents for Common Lisp?

I first saw this in a tweet by Christophe Lalanne.

Introduction to Basic Legal Citation (online ed. 2014)

Filed under: Identifiers,Law,Law - Sources,Legal Informatics — Patrick Durusau @ 7:34 pm

Introduction to Basic Legal Citation (online ed. 2014) by Peter W. Martin.

From the post:

This work first appeared in 1993. It was most recently revised in the fall of 2014 following a thorough review of the actual citation practices of judges and lawyers, the relevant rules of appellate practice of federal and state courts, and the latest edition of the ALWD Guide to Legal Citation, released earlier in the year. As has been true of all editions released since 2010, it is indexed to both the ALWD guide and the nineteenth edition of The Bluebook. However, it also documents the many respects in which contemporary legal writing, very often following guidelines set out in court rules, diverges from the citation formats specified by those academic texts.

The content of this guide is also available in three different e-book formats: 1) a pdf version that can be printed out in whole or part and also used with hyperlink navigation on an iPad or other tablet, indeed, on any computer; 2) a version designed specifically for use on the full range of Kindles as well as other readers or apps using the Mobi format; and 3) a version in ePub format for the Nook and other readers or apps that work with it. To access any of them, click here. (Over 50,000 copies of the 2013 edition were downloaded.)

Since the guide is online, its further revision is not tied to a rigid publication cycle. Any user seeing a need for clarification, correction, or other improvement is encouraged to “speak up.” What doesn’t work, isn’t clear, is missing, appears to be in error? Has a change occurred in one of the fifty states that should be reported? Comments of these and other kinds can be sent by email addressed to peter.martin@cornell.edu. (Please include “Citation” in the subject line.) Many of the features and some of the coverage of this reference are the direct result of past user questions and advice.

A complementary series of video tutorials offers a quick start introduction to citation of the major categories of legal sources. They may also be useful for review. Currently, the following are available:

  1. Citing Judicial Opinions … in Brief (8.5 minutes)
  2. Citing Constitutional and Statutory Provisions … in Brief (14 minutes)
  3. Citing Agency Material … in Brief (12 minutes)

Finally, for those with an interest in current issues of citation practice, policy, and instruction, there is a companion blog, “Citing Legally,” at: http://citeblog.access-to-law.com.

Obviously legal citations are identifiers, but Peter helpfully expands on the uses of legal citations:

A reference properly written in “legal citation” strives to do at least three things, within limited space:

  • identify the document and document part to which the writer is referring
  • provide the reader with sufficient information to find the document or document part in the sources the reader has available (which may or may not be the same sources as those used by the writer), and
  • furnish important additional information about the referenced material and its connection to the writer’s argument to assist readers in deciding whether or not to pursue the reference.

I would quibble with Peter’s description of a legal citation “identify[ing] a document or document part,” in part because of his second point, that a reader can find an alternative source for the document.

To me it is easier to say that a legal citation identifies a legal decision, legislation, or an agency decision/rule, which may be reported by any number of sources. Some sources have their own unique reference systems that are mapped to other systems. Making the legal decision, legislation, or agency decision/rule an abstraction identified by the citation avoids confusion with any particular source.
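
A trivial sketch of that abstraction in Python (my illustration; the citation and reporters are real, but the structure is the point):

    # One decision as an abstraction, identified by its citation, with the
    # reporting sources as interchangeable occurrences.
    decision = {
        "citation": "Brown v. Board of Education, 347 U.S. 483 (1954)",
        "reported_in": [
            "United States Reports",
            "Supreme Court Reporter",
            "Lawyers' Edition",
        ],
    }

    # A reader resolves the citation against whichever source is at hand.
    print(decision["reported_in"])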

A must read for law students, practitioners, judges and potential inventors of the Nth citation system for legal materials.

“Trust” Model Broken?

Filed under: Cybersecurity,Security — Patrick Durusau @ 2:36 pm

Financial Breaches Show ‘Trust Model’ Is Broken by Bob West.

From the post:

The one thing the seemingly never-ending string of security breaches highlights is the fact that the current online trust model as we know it is broken. The security compromises at JPMorgan Chase, Home Depot, Dairy Queen, and elsewhere are proof that it is time for industry stakeholders to go back to the drawing board. Clearly, the old model of throwing resources at perimeter defenses, sticking in a few intrusion and anomaly detection tools, patching, and praying is not working.

It’s bad enough when major retailers like Home Depot get compromised. It’s much worse when JPMorgan Chase, the nation’s largest bank, says intruders were able to break into its systems and steal data on a staggering 83 million consumer and commercial accounts. Having served as the Chief Information Security Officer at Fifth Third Bank and Bank One, respectively in Cincinnati and Columbus, Ohio, I can speak from personal experience. It’s a full-blown crisis when more than a dozen major financial services companies admit to having their networks being probed for weaknesses by the same attackers as those behind the Chase breach. This reflects the increasing technical sophistication and the audacity of those behind these attacks.
…(emphasis added)

I mention Bob’s post because it isn’t clear, at least to me, what “…current online trust model…” he is talking about. I did some light searching and found any number of papers, posts, emails, etc., on trust models. So far as I could tell, none of them qualified as the “…current online trust model….”

Is there some commonly accepted online trust model that I have overlooked?

I ask because if there is no common notion of online trust model, misunderstandings and disappointments in security discussions are sure to follow.

Where would you start to define an online trust model? Which existing models would be important to map to one another?

November 1, 2014

Guess the Manuscript XVI

Filed under: British Library,Image Processing,Image Recognition,Image Understanding — Patrick Durusau @ 7:55 pm

Guess the Manuscript XVI

From the post:

Welcome to the sixteenth instalment of our popular Guess the Manuscript series. The rules are simple: we post an image of part of a manuscript that is on the British Library’s Digitised Manuscripts site, you guess which one it’s taken from!

[Image: detail from the mystery manuscript]

Are you as surprised as we are to find an umbrella in a medieval manuscript? The manuscript from which this image was taken will feature in a blogpost in the near future.

In the meantime, answers or guesses please in the comments below, or via Twitter @BLMedieval.

Caution! The Medieval Period lasted from five hundred (500) C.E. until fifteen hundred (1500) C.E. Google NGrams records the first use of “umbrella” at or around sixteen-sixty (1660). Is this an “umbrella” or something else?

Google’s reverse image search turned up only repostings of the challenge image, no similar images. Not sure that helps, but it was worth a try.

On the bright side, there are only two hundred and fifty-seven (257) manuscripts in the digitized collection dated between five hundred (500) C.E. until fifteen hundred (1500) C.E.

What stories or information in those volumes might be accompanied by such an image? I need to create a list of the classes of those manuscripts.

Suggestions? Is there an image processor in the house?

Enjoy!

Extracting SVO Triples from Wikipedia

Filed under: SVO,Wikipedia — Patrick Durusau @ 7:27 pm

Extracting SVO Triples from Wikipedia by Sujit Pal.

From the post:

I recently came across this discussion (login required) on LinkedIn about extracting (subject, verb, object) (SVO) triples from text. Jack Park, owner of the SolrSherlock project, suggested using ReVerb to do this. I remembered an entertaining Programming Assignment from when I did the Natural Language Processing Course on Coursera, that involved finding spouse names from a small subset of Wikipedia, so I figured it would be interesting to try using ReVerb against this data.

This post describes that work. As before, given the difference between this and the “preferred” approach that the automatic grader expects, results are likely to be wildly off the mark. BTW, I highly recommend taking the course if you haven’t already, there are lots of great ideas in there. One of the ideas deals with generating “raw” triples, then filtering them using known (subject, object) pairs to find candidate verbs, then turning around and using the verbs to find unknown (subject, object) pairs.

So in order to find the known (subject, object) pairs, I decided to parse the Infobox content (the “semi-structured” part of Wikipedia pages). Wikipedia markup is a mini programming language in itself, so I went looking for some pointers on how to parse it (third party parsers or just ideas) on StackOverflow. Someone suggested using DBPedia instead, since they have already done the Infobox extraction for you. I tried both, and somewhat surprisingly, manually parsing Infobox gave me better results in some cases, so I describe both approaches below.

As Sujit points out, you will want to go beyond Wikipedia with this technique, but it is a good place to start!
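
To see what “raw” triples look like before any filtering, a deliberately naive pass over POS-tagged text takes only a few lines of NLTK. This is nowhere near ReVerb’s quality; it only catches adjacent noun-verb-noun runs.

    import nltk  # requires the NLTK tokenizer and tagger models (nltk.download())

    # Deliberately naive SVO extraction: report every adjacent
    # noun-verb-noun run as a (subject, verb, object) triple.
    def naive_svo(sentence):
        tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
        triples = []
        for (s, s_tag), (v, v_tag), (o, o_tag) in zip(tagged, tagged[1:], tagged[2:]):
            if s_tag.startswith("NN") and v_tag.startswith("VB") and o_tag.startswith("NN"):
                triples.append((s, v, o))
        return triples

    print(naive_svo("Lincoln married Mary"))  # -> [('Lincoln', 'married', 'Mary')]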

If somebody does leak the Senate Report on CIA Torture, that would be a great text (hopefully the full version) to mine with such techniques.

Remembering that anonymity = no accountability.

Dive Into NLTK

Filed under: NLTK,Python — Patrick Durusau @ 7:17 pm

Dive Into NLTK Part I: Getting Started with NLTK

From the post:

NLTK is the most famous Python Natural Language Processing Toolkit. Here I will give a detailed tutorial about NLTK. This is the first article in a series where I will write everything about NLTK with Python, especially about text mining and text analysis online.

This is the first article in the series “Dive Into NLTK”. Here is an index of all the articles in the series that have been published to date:

Part I: Getting Started with NLTK (this article)
Part II: Sentence Tokenize and Word Tokenize
Part III: Part-Of-Speech Tagging and POS Tagger
Part IV: Stemming and Lemmatization
Part V: Using Stanford Text Analysis Tools in Python
Part VI: Add Stanford Word Segmenter Interface for Python NLTK
Part VII: A Preliminary Study on Text Classification

Kudos for the refreshed index at the start of each post. Ease of navigation is a plus!

Have you considered subjecting your “usual” reading to NLTK? That is, rather than analyzing a large corpus, what about the next CS article you have been meaning to read?

The most I have done so far is to build concordances for standards drafts, mostly to catch bad keyword usage and misspellings. There is a lot more that could be done. Suggestions?
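
For the concordance case, the whole exercise is a few lines of NLTK (the file name below is a stand-in, not a real document):

    import nltk  # requires the punkt tokenizer model (nltk.download('punkt'))

    # Build a concordance over a draft to spot inconsistent keyword usage
    # and likely misspellings. "standard-draft.txt" is a stand-in name.
    with open("standard-draft.txt") as f:
        tokens = nltk.word_tokenize(f.read())

    text = nltk.Text(tokens)
    text.concordance("shall")  # every occurrence of "shall", in context
    text.concordance("shal")   # check for a likely misspelling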

Enjoy this series!

Querying Graphs with Neo4j [cheatsheet]

Filed under: Cypher,Neo4j — Patrick Durusau @ 6:13 pm

Querying Graphs with Neo4j by Michael Hunger.

Download the refcard by the usual process: log into DZone, etc.

When you open the PDF file in a viewer, do be careful. (Page references are to the DZone cheatsheet.)

Cover: The entire cover is a download link. Touch it at all and you will be taken to a download link for Neo4j.

Page 1 covers “What is a Graph Database?” and “What is Neo4j?,” just in case you have been forced by home invaders to download a refcard for a technology you know nothing about.

Page 2 pitches the Neo4j server and then Getting Started with Neo4j, perhaps to annoy the NSA with repetitive content.

The DZone cheatsheet replicates the cheatsheet at: http://neo4j.com/docs/2.0/cypher-refcard/, with the following changes:

Page 3

WITH

Re-written. Old version:

MATCH (user)-[:FRIEND]-(friend) WHERE user.name = {name} WITH user, count(friend) AS friends WHERE friends > 10 RETURN user

The WITH syntax is similar to RETURN. It separates query parts explicitly, allowing you to declare which identifiers to carry over to the next part.

MATCH (user)-[:FRIEND]-(friend) WITH user, count(friend) AS friends ORDER BY friends DESC SKIP 1 LIMIT 3 RETURN user

You can also use ORDER BY, SKIP, LIMIT with WITH.

New version:

MATCH (user)-[:KNOWS]-(friend) WHERE user.name = {name} WITH user, count(*) AS friends WHERE friends > 10 RETURN user

WITH chains query parts. It allows you to specify which projection of your data is available after WITH.

You can also use ORDER BY, SKIP, LIMIT and aggregation with WITH. You might have to alias expressions to give them a name.

I leave it to your judgement which version was the clearer.

Page 4

MERGE

inserts: typo “{name: {value3}} )” on last line of final example under MERGE.

SET

inserts: “SET n += {map} Add and update properties, while keeping existing ones.”

INDEX

inserts: “MATCH (n:Person) WHERE n.name IN {values} An index can be automatically used for the IN collection checks.”

Page 5

PATTERNS

changes: “(n)-[*1..5]->(m) Variable length paths.” to “(n)-[*1..5]->(m) Variable length paths can span 1 to 5 hops.”

changes: “(n)-[*]->(m) Any depth. See the performance tips.” to “(n)-[*]->(m) Variable length path of any depth. See performance tips.”

changes: “shortestPath((n1:Person)-[*..6]-(n2:Person)) Find a single shortest path.” to “shortestPath((n1)-[*..6]-(n2))”

COLLECTIONS

changes: “range({first_num},{last_num},{step}) AS coll Range creates a collection of numbers (step is optional), other functions returning collections are: labels, nodes, relationships, rels, filter, extract.” to “range({from},{to},{step}) AS coll Range creates a collection of numbers (step is optional).” [Loss of information from the earlier version.]

inserts: “UNWIND {names} AS name MATCH (n:Person {name:name}) RETURN avg(n.age) With UNWIND, you can transform any collection back into individual rows. The example matches all names from a list of names.”

MAPS

inserts: “range({start},{end},{step}) AS coll Range creates a collection of numbers (step is optional).”

Page 6

PREDICATES

changes: “NOT (n)-[:KNOWS]->(m) Exclude matches to (n)-[:KNOWS]->(m) from the result.” to “NOT (n)-[:KNOWS]->(m) Make sure the pattern has at least one match.” [Older version more precise?]

replaces: mixed case, true/TRUE with TRUE

FUNCTIONS

inserts: “toInt({expr}) Converts the given input into an integer if possible; otherwise it returns NULL.”

inserts: “toFloat({expr}) Converts the given input into a floating point number if possible; otherwise it returns NULL.”

PATH FUNCTIONS

changes: “MATCH path = (begin) -[*]-> (end) FOREACH (n IN rels(path) | SET n.marked = TRUE) Execute a mutating operation for each relationship of a path.” to “MATCH path = (begin) -[*]-> (end) FOREACH (n IN rels(path) | SET n.marked = TRUE) Execute an update operation for each relationship of a path.”

COLLECTION FUNCTIONS

changes: “FOREACH (value IN coll | CREATE (:Person {name:value})) Execute a mutating operation for each element in a collection.” to “FOREACH (value IN coll | CREATE (:Person {name:value})) Execute an update operation for each element in a collection.”

MATHEMATICAL FUNCTIONS

changes: “degrees({expr}), radians({expr}), pi() Converts radians into degrees, use radians for the reverse. pi for π.” to “degrees({expr}), radians({expr}), pi() Converts radians into degrees, use radians for the reverse.” Loses “pi for π.”

changes: “log10({expr}), log({expr}), exp({expr}), e() Logarithm base 10, natural logarithm, e to the power of the parameter. Value of e.” to “log10({expr}), log({expr}), exp({expr}), e() Logarithm base 10, natural logarithm, e to the power of the parameter.” Loses “Value of e.”

Page 7

STRING FUNCTIONS

inserts: “split({string}, {delim}) Split a string into a collection of strings.”

AGGREGATION

changes: “collect(n.property) Collection from the values, ignores NULL.” to “collect(n.property) Value collection, ignores NULL.”

START

remove: “START n=node(*) Start from all nodes.”

remove: “START n=node({ids}) Start from one or more nodes specified by id.”

remove: “START n=node({id1}), m=node({id2}) Multiple starting points.”

remove: “START n=node:nodeIndexName(key={value}) Query the index with an exact query. Use node_auto_index for the automatic index.”

inserts: “START n = node:indexName(key={value}) Query the index with an exact query. Use node_auto_index for the old automatic index.”

inserts: ‘START n = node:indexName({query}) Query the index by passing the query string directly, can be used with lucene or spatial syntax. E.g.: “name:Jo*” or “withinDistance:[60,15,100]”‘


I may have missed some changes because, as you know, the “cheatsheets” for Cypher have no particular order for their entries. Alphabetical order suggests itself for future editions, sans the marketing materials.

Changes to a query language should appear where a user would expect to find the command in question. For example, the note that “CREATE a={property:’value’}” has been removed should appear where that command would be expected on the cheatsheet. Users should not have to hunt high and low for “CREATE a={property:’value’}” on a cheatsheet.

I have passed over incorrect use of the definite article and other problems without comment.

Despite the shortcomings of the DZone refcard, I suggest that you upgrade to it.
