Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

September 27, 2012

Faunus

Filed under: Faunus,Graphs,Networks — Patrick Durusau @ 5:14 pm

Faunus

From the home page:

Faunus is a Hadoop based distributed computing framework for processing property graphs. A breadth-first version of the graph traversal language Gremlin operates on a vertex-centric property graph data structure. Faunus provides adaptors to the distributed graph database Titan, any Rexster fronted graph database, and to text and binary graphs stored in HDFS. The provided Gremlin operations and Hadoop graph tools can be extended using MapReduce and Blueprints.

Warning: Limitation on Vertexes

Faunus Vertex

  • id: a vertex id is a positive long value and therefore, a graph in Faunus can not have more than 9,223,372,036,854,775,807 vertices.
  • properties: the size of the properties map is denoted by a positive short and therefore there can not exist more than 32,767 properties per vertex.
  • edges:

    • unique labels: edges are indexed by their label using a short and therefore, there can not be more than 32,767 unique labels for the incoming (or outgoing) edges of a vertex.
    • total edges: the edge size for any one label is represented by an int and therefore, for any direction and any label, there can not be more than 2,147,483,647 edges.
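The ceilings above follow directly from the maximum values of Java's signed primitive types; a quick sanity check (sketched in Python):

```python
# The Faunus limits quoted above are just the maximum values of
# Java's signed primitive types (short, int, long).
java_short_max = 2**15 - 1  # properties per vertex / unique edge labels
java_int_max = 2**31 - 1    # edges per label and direction
java_long_max = 2**63 - 1   # vertex (and edge) ids

print(java_short_max)  # 32767
print(java_int_max)    # 2147483647
print(java_long_max)   # 9223372036854775807
```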

Warning: Limitation on Edges

Faunus Edge

  • id: an edge id is a positive long value and therefore, a graph in Faunus can not have more than 9,223,372,036,854,775,807 edges.
  • properties: the size of the properties map is denoted by a positive short and therefore there can not exist more than 32,767 properties per edge.

I don’t like putting limitation warnings in my first post on software but thought you needed to be forewarned. 😉

Couchbase Java API Cheat Sheet Revisited

Filed under: Couchbase,Java — Patrick Durusau @ 3:46 pm

Couchbase Java API Cheat Sheet Revisited by Don Pinto.

From the post:

With the release of Couchbase Server 2.0 – Beta, I thought I'd take some time to update the Couchbase Java API Cheat Sheet I had posted earlier. Couchbase Server 2.0 has a lot of awesome features and the 2.0 compatible Java APIs are available in the Java SDK 1.1 Dev Preview 3.

What's new?

  • Lots of new APIs to build and execute queries against views defined in Couchbase Server
  • APIs to specify persistence requirements
  • APIs to specify replication requirements

Hope you find this new cheat sheet helpful. I'll be happy to know of any cool projects that you create using the new Java API. Or better yet, just share code via your Github account with us and other users.

Would look best with a color printer.

No suggestions so far on topic map cheat sheets.

Maybe I should have asked about “subject” cheat sheets?

The results of analysis/identification/modeling of subjects in public data sets.

Couchbase and Full-text Search: The Couchbase Transport for Elastic Search

Filed under: Couchbase,ElasticSearch,Full-Text Search,Searching — Patrick Durusau @ 3:36 pm

Couchbase and Full-text Search: The Couchbase Transport for Elastic Search

From the post:

Couchbase Server 2.0 adds powerful indexing and querying capabilities through its distributed map reduce implementation. But in addition to that, many applications, particularly content applications, also need full-text search capabilities. Today we are releasing a developer preview of the Couchbase Transport Plugin for Elastic Search. This plugin uses the new Cross Data Center Replication functionality which will be a part of Couchbase Server 2.0. Using this new transport, you can get started with Couchbase and ElasticSearch easily. This blog explains how you can have this integration up and running in minutes.

There goes the weekend! Already! 😉

Searching and Accessing Data in Riak (overview slides)

Filed under: MapReduce,Riak — Patrick Durusau @ 3:22 pm

Searching and Accessing Data in Riak by Andy Gross and Shanley Kane.

From the description:

An overview of methods for searching and aggregating data in Riak, covering Riak Search, secondary indexes and MapReduce. Reviews use cases and features for each method, when to use which, and the limitations and advantages of each approach. In addition, it covers query examples and the high-level architecture of each method.

If you are already familiar with search/access to data in Riak, you won’t find anything new here.

It would be useful to have some topic map specific examples written using Riak.

Sing out if you decide to pursue that train of thought.

Mining Twitter Data with Ruby – Visualizing User Mentions

Filed under: Graphs,Ruby,Tweets,Visualization — Patrick Durusau @ 3:11 pm

Mining Twitter Data with Ruby – Visualizing User Mentions by Greg Moreno.

From the post:

In my previous post on mining twitter data with ruby, we laid our foundation for collecting and analyzing Twitter updates. We stored these updates in MongoDB and used map-reduce to implement a simple counting of tweets. In this post, we'll show relationships between users based on mentions inside the tweet. Fortunately for us, there is no need to parse each tweet just to get a list of users mentioned in the tweet because Twitter provides the “entities.mentions” field that contains what we need. After we collected the “who mentions who”, we then construct a directed graph to represent these relationships and convert them to an image so we can actually see it.
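The "who mentions who" counting step described above can be sketched in a few lines. The tweet layout below is a simplified stand-in, not the exact Twitter API payload (field names assumed for illustration):

```python
from collections import Counter

# Simplified stand-in for tweets with a pre-parsed mentions field
# (layout assumed for illustration, not the real Twitter API shape).
tweets = [
    {"user": "alice", "mentions": ["bob", "carol"]},
    {"user": "bob",   "mentions": ["alice"]},
    {"user": "alice", "mentions": ["bob"]},
]

# Weighted directed edges: (mentioner, mentioned) -> count.
edges = Counter()
for tweet in tweets:
    for mentioned in tweet["mentions"]:
        edges[(tweet["user"], mentioned)] += 1

print(edges[("alice", "bob")])  # 2
```

From here, the edge counts feed straight into whatever graph library you use for the visualization step.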

Good lesson in paying attention to your data stream.

You can impress your clients with an elaborate system for parsing tweets for mentions, or you can just use the "entities.mentions" field.

I would rather use the "entities.mentions" field's content to create linkage to more content. Possibly searched/parsed content.

It's a question of where you are going to devote your resources.

EdSense:… [Sepulcher or bricks for next silo?]

Filed under: Couchbase,Education,ElasticSearch — Patrick Durusau @ 2:55 pm

EdSense: Building a self-adapting, interactive learning portal with Couchbase by Christopher Tse.

From the description:

Talk from Christopher Tse (@christse), Director of McGraw-Hill Education Labs (MHE Labs), on how to architect a scalable adaptive learning system using a combination of Couchbase 2.0 and ElasticSearch as back-ends. These slides are the ones presented at CouchConf San Francisco on September 21, 2012.

Code for the proof-of-concept project, called "Learning Portal", has been open sourced and is available via Github at http://github.com/couchbaselabs/learningportal

When you hear about semantic diversity, do you ever think about EdSense, Moodle, EdX, Coursera, etc., as examples of semantic diversity?

And semantic silos?

All content delivery systems are semantic silos.

They have made choices about storage, access and delivery that have semantics, in addition to the semantics of your content.

The question is whether your silo will become a sepulcher for your content or bricks for the next silo in turn.

September 26, 2012

KONVENS2012: The 11th Conference on Natural Language Processing (proceedings)

Filed under: Natural Language Processing — Patrick Durusau @ 4:05 pm

KONVENS2012: The 11th Conference on Natural Language Processing (proceedings) Vienna, September 19-21, 2012

As is usually the case, one find (corpus analysis) leads to another.

In this case a very interesting set of conference proceedings on natural language processing.

Just scanning the titles I see several that will be of interest to topic mappers.

Enjoy!

Using information retrieval technology for a corpus analysis platform

Filed under: Corpora,Corpus Linguistics,Information Retrieval,Lucene,MapReduce — Patrick Durusau @ 3:57 pm

Using information retrieval technology for a corpus analysis platform by Carsten Schnober.

Abstract:

This paper describes a practical approach to use the information retrieval engine Lucene for the corpus analysis platform KorAP, currently being developed at the Institut für Deutsche Sprache (IDS Mannheim). It presents a method to use Lucene's indexing technique and to exploit it for linguistically annotated data, allowing full flexibility to handle multiple annotation layers. It uses multiple indexes and MapReduce techniques in order to keep KorAP scalable.

The support for multiple annotation layers is of particular interest to me because the “subjects” of interest in a text may vary from one reader to another.

Being mindful that for topic maps, the annotation layers and annotations themselves may be subjects for some purposes.

What’s the Current State of Graph Databases?

Filed under: Graphs,Neo4j — Patrick Durusau @ 3:49 pm

What’s the Current State of Graph Databases? by Alex Popescu.

Alex points to an interview of Jim Webber by Srini Penchikala for InfoQ.

Focuses, as you might expect, on Neo4j and not graph databases in general.

Graph databases have a rich history and a genuine survey would be quite useful.

BBC's Radio 4 on Vagueness in Law

Filed under: Law,Vagueness — Patrick Durusau @ 3:36 pm

BBC's Radio 4 on Vagueness in Law by Adam Wyner.

From the post:

On the BBC Radio 4 Analysis program, there was an episode about the Sorites Paradoxes. These are the sorts of paradoxes that arise about categories that have no sharp boundaries:

One grain of sand is not a heap of sand; two grains of sand are not a heap of sand; …. ; adding one more grain of sand to some sand is not enough to make a heap of sand; yet, at some point, we agree we have a heap of sand.

So, where are the boundaries?

How would you distinguish “lap dancing” from “dancing?”

Highly entertaining! Will look for other relevant episodes.

Panel on Digital Dictionaries (MLA/LSA/ADS)

Filed under: Dictionary,Topic Maps — Patrick Durusau @ 3:13 pm

Panel on Digital Dictionaries (MLA/LSA/ADS) by Ben Zimmer.

From the post:

Eric Baković has noted the happy confluence of the annual meetings of the Linguistic Society of America and the Modern Language Association, both scheduled for January 3-6, 2013 at sites within reasonable walking distance of each other in Boston. (The LSA will be at the Boston Marriott Copley Place, and the MLA at the Hynes Convention Center and the Sheraton Boston.) Eric has plugged the joint organized session on open access for which he will be a panelist, so allow me to do the same for another panel with MLA/LSA crossover appeal. The MLA’s Discussion Group on Lexicography has held a special panel for several years now, but many lexicographers and fellow travelers in linguistics have been unable to attend because of the conflict with the LSA and the concurrent meeting of the American Dialect Society. This time around, with the selected topic of “Digital Dictionaries,” the whole MLA/LSA/ADS crowd can join in.

Interested to hear your thoughts if you are able to attend!

30 Websites To Download Free Vector Images

Filed under: Graphics — Patrick Durusau @ 2:02 pm

30 Websites To Download Free Vector Images

Like the title says, “free” vector images. High quality ones too!

Just in case you are looking to spruce up your website or need vector images for some other purpose.

I first saw this at DZone.

Clojure Mindmap

Filed under: Clojure,Mind Maps — Patrick Durusau @ 1:54 pm

Clojure Mindmap by Siva Jagadeesan.

Impressive graphic but I suspect the task of building it was more instructive than the result.

There is something about slowing down enough to write information down and to plot its relationship(s) to other information that makes it "sticky."

Or is it not so much a question of speed as of the effort required to write the information down and plot it?

Do you remember information you have to look up and then type into a text longer/better than a quote you grab for a quick cut-n-paste?

I first saw this at DZone.

Splunk's Software Architecture and GUI for Analyzing Twitter Data

Filed under: CS Lectures,Splunk,Tweets — Patrick Durusau @ 1:24 pm

Splunk's Software Architecture and GUI for Analyzing Twitter Data by Marti Hearst.

From the post:

Today we learned about an alternative software architecture for processing large data, getting the technical details from Splunk's VP of Engineering, Stephen Sorkin. Splunk also has a really amazing GUI for analyzing Twitter and other data sources in real time; be sure to watch the last 15 minutes of the video to see the demo:

Someone needs to organize a “big data tool of the month” club!

Or at the rate of current development, would that be a “big data tool of the week” club?

Understanding User Authentication and Authorization in Apache HBase

Filed under: Accumulo,HBase,Security — Patrick Durusau @ 12:57 pm

Understanding User Authentication and Authorization in Apache HBase by Matteo Bertozzi.

From the post:

With the default Apache HBase configuration, everyone is allowed to read from and write to all tables available in the system. For many enterprise setups, this kind of policy is unacceptable.

Administrators can set up firewalls that decide which machines are allowed to communicate with HBase. However, machines that can pass the firewall are still allowed to read from and write to all tables. This kind of mechanism is effective but insufficient because HBase still cannot differentiate between multiple users that use the same client machines, and there is still no granularity with regard to HBase table, column family, or column qualifier access.

In this post, we will discuss how Kerberos is used with Hadoop and HBase to provide User Authentication, and how HBase implements User Authorization to grant users permissions for particular actions on a specified set of data.
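For a rough sense of what the setup involves: enabling Kerberos authentication and the access-control coprocessor happens in hbase-site.xml, with properties along these lines (a sketch based on the standard HBase security property names; follow the post and the HBase security documentation for the complete configuration, including principals and keytabs):

```xml
<!-- Sketch only: see the post and the HBase security docs for the
     full Kerberos setup (principals, keytabs, secure RPC, etc.). -->
<property>
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hbase.security.authorization</name>
  <value>true</value>
</property>
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
```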

When you think about security, remember: Accumulo: Why The World Needs Another NoSQL Database. Accumulo was written to provide cell level security.

Nice idea but the burden of administering cell level authorizations is going to lead to sloppy security practices. Or to granting higher-level permissions, inadvisably, to some users.

Not to mention the truck sized security hole in Accumulo for imported data changing access tokens.

You can get a lot of security mileage out of HBase and Kerberos, long before you get to cell level security permissions.

Cypher Query Language and Neo4j (webinar)

Filed under: Cypher,Graphs,Neo4j — Patrick Durusau @ 8:22 am

Cypher Query Language and Neo4j (Registration page)

Thursday, Sept 27, 2012
10:00 PDT // 19:00 CEST

From the description:

The Neo4j graph database is all about relationships. It allows you to model domains of connected data easily. Querying using an imperative API is cumbersome and bloated. So the Neo Technology team decided to develop a query language more suited to query graph data. Join us to learn about the journey from its inception to being a usable tool.

Curious, has anyone compared Cypher to other graph query languages?

September 25, 2012

Coursera's free online R course starts today

Filed under: Data Analysis,R — Patrick Durusau @ 3:31 pm

Coursera's free online R course starts today by David Smith.

From the post:

Coursera offers a number of on-line courses, all available for free and taught by experts in their fields. Today, the course Computing for Data Analysis begins. Taught by Johns Hopkins Biostatistics professor (and co-author of the Simply Statistics blog) Roger Peng, the course will teach you how to program in R and use the language for data analysis. Here’s a brief introduction to the course:

(video omitted)

The course will run for the next 4 weeks, with a workload of 3-5 hours per week. You can sign up at the link below.

Coursera: Computing for Data Analysis

A day late but you can still register (I just did).

Location Sensitive Hashing in Map Reduce

Filed under: Hadoop,MapReduce — Patrick Durusau @ 3:23 pm

Location Sensitive Hashing in Map Reduce by Ricky Ho.

From the post:

Inspired by Dr. Gautam Shroff, who teaches the class Web Intelligence and Big Data on coursera.org: there are many scenarios where we want to compute similarity between large amounts of items (e.g. photos, products, persons, resumes … etc). I want to add another algorithm to my Map/Reduce algorithm catalog.

For background on the Map/Reduce implementation on Hadoop, I have a previous post that covers the details.

“Location” here is not used in the geographic sense but as a general measure of distance. Could be geographic, but could be some other measure of location as well.
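As a toy illustration of the underlying idea (this is the random-hyperplane flavor of LSH for cosine similarity, sketched here for intuition, not Ricky's Map/Reduce code): vectors pointing in similar directions hash to the same short signature, so candidate pairs can be found by grouping on the signature instead of comparing all pairs.

```python
import random

# Toy random-hyperplane LSH for cosine similarity (illustration only;
# not the Map/Reduce implementation from the post).
random.seed(42)
DIM, BITS = 8, 4
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def signature(vec):
    # One bit per hyperplane: which side of the plane the vector lies on.
    return tuple(
        int(sum(p * v for p, v in zip(plane, vec)) >= 0) for plane in planes
    )

a = [1.0, -0.5, 2.0, 0.3, -1.2, 0.7, 0.1, 1.5]
b = [0.5 * x for x in a]  # same direction as a, so same signature
print(signature(a) == signature(b))  # True
```

In a Map/Reduce setting the signature naturally becomes the map output key, so same-bucket items meet at the same reducer.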

Twitter’s Scalding and Algebird: Matrix and Lighweight Algebra Library

Filed under: Algebird,Matrix,Scalding — Patrick Durusau @ 3:11 pm

Twitter’s Scalding and Algebird: Matrix and Lighweight Algebra Library by Alex Popescu.

Alex points out:

  1. Scalding now includes a type-safe Matrix API
  2. In the familiar Fields API, we've added the ability to add type information to fields which allows scalding to pick up Ordering instances so that grouping on almost any scala collection becomes easy.
  3. Algebird is our lightweight abstract algebra library for Scala and is targeted for building aggregation systems (such as Storm).

Of the three, I am going to take a look at Algebird first.

Open Data Cooking: Data Visualization that You Can Eat

Filed under: Graphics,Navigation,Visualization — Patrick Durusau @ 2:49 pm

Open Data Cooking: Data Visualization that You Can Eat

From the post:

The results of the long-awaited Open Data Cooking Workshop [data-cuisine.net] in Helsinki have been posted online. The workshop, organized by some very open-minded visualization fanatics, investigated new ways to represent data through the inherent characteristics of food, such as color, form, texture, smell, taste, nutrition or origin.

The workshop encouraged participants to express data in concrete, sensually experienceable food in order to gain insight into the constructions and relations of media. At the end of the workshop, an open data menu was created and publicly tasted.

I started to skip this post but then remembered eating in the Far East, where items on the menu appear in street windows.

Not text based navigation but navigation none the less.

The Man Behind the Curtain

Filed under: Intelligence,Privacy — Patrick Durusau @ 2:33 pm

The Man Behind the Curtain

From the post:

Without any lead-in whatsoever, we just ask that you watch the video above.

And we ask that you hang on for a few moments—this goes far beyond the hocus pocus you're thinking the clip contains.

You really need to see this video.

Then answer:

Should watchers watch themselves?

Should people watch the watchers?

conceptClassifier for SharePoint 2010

Filed under: Natural Language Processing,Searching,SharePoint — Patrick Durusau @ 2:12 pm

conceptClassifier for SharePoint 2010 (PDF – White paper on conceptClassifier)

I encountered this white paper in a post at Beyond Search: Concept Searching Enrolls University of California.

The white paper compares SharePoint 2010 to FAST Search and conceptClassifier in two comparison charts (images not reproduced here).

A comparison to other Sharepoint enhancement tools would be more useful.

Did you see anything particularly remarkable in the listed capabilities?

It may not be common for SharePoint users, but auto-tagging of content has been a mainstay of NLP projects for decades.

New Tool: JMXC – JMX Console

Filed under: Java,Performance — Patrick Durusau @ 1:38 pm

New Tool: JMXC – JMX Console

From the post:

When you are obsessed with performance and run a performance monitoring service like Sematext does, you need a quick and easy way to inspect Java apps' MBeans in JMX. We just open-sourced JMXC, our 1-class tool for dumping the contents of JMX, or specific MBeans. This is a true and super-simple, no external dependencies console tool that can connect to JMX via Java application PID or via JMX URL and can dump either all MBeans or those specified on the command line.

JMX lives at https://github.com/sematext/jmxc along with other Sematext open-source tools. Feedback and pull requests welcome! Enjoy!

If that sounds a tad cryptic, try reading: Introducing MBeans.

Too good of an opportunity to highlight Sematext’s open source tools to miss.

Battle of the Giants: Apache Solr 4.0 vs ElasticSearch

Filed under: ElasticSearch,Lucene,SolrCloud — Patrick Durusau @ 1:28 pm

Battle of the Giants: Apache Solr 4.0 vs ElasticSearch

From the post:

Apache Solr 4.0 release is imminent and we have a heavily anticipated Solr vs. ElasticSearch blog post series going on. What better time to share that our Rafał Kuć will be giving a talk titled Battle of the giants: Apache Solr 4.0 vs ElasticSearch at the upcoming ApacheCon/Lucene EuroCon in Germany this November.

Abstract:

In this talk the audience will hear how the long awaited Apache Solr 4.0 (aka SolrCloud) compares to the second search engine built on top of Apache Lucene – ElasticSearch. From understanding the architectural differences and behavior in situations like split-brain, to cluster recovery. From distributed indexing and document distribution control, to handling multiple shards and replicas in a single cluster. During the talk, we will also compare the most used and anticipated features such as facet handling, document grouping and so on. At the end we will talk about performance differences, cluster monitoring and troubleshooting.

ApacheCon Europe 2012
Rhein-Neckar-Arena, Sinsheim, Germany
5–8 November 2012

Email, tweet, publicize ApacheCon Europe 2012!

Blog especially! A pale imitation but those of us unable to attend benefit from your posts!

Announcing TokuDB v6.5: Optimized for Flash [Disambiguation]

Filed under: MariaDB,MySQL,TokuDB — Patrick Durusau @ 1:18 pm

Announcing TokuDB v6.5: Optimized for Flash

Semantic confusion follows me around. Like the harpies that tormented Phineus. Well, maybe not quite that bad. 😉

But I see in the news feed that TokuDB v6.5 has been optimized for Flash.

First thought: Why? Who would want a database optimized for Flash?

But they did not mean Flash, or one of the other seventy-five (75) meanings of Flash, but Flash.

I’m glad we had this conversation and cleared that up!

The “Flash” in this case refers to “flash memory.” And so this is an exciting announcement:

We are excited to announce TokuDB® v6.5, the latest version of Tokutek's flagship storage engine for MySQL and MariaDB.

This version offers optimization for Flash as well as more hot schema change operations for improved agility.

We'll be posting more details about the new features and performance, so here's an overview of what's in store.

Flash
TokuDB v6.5 continues the great Toku-tradition of fast insertions. On flash drives, we show an order-of-magnitude (9x) faster insertion rate than InnoDB. TokuDB’s standard compression works just as well on flash and helps you get the most out of your storage system. And TokuDB reduces wear on solid-state drives by more than an order of magnitude. The full technical details will be subject of a future blog post. In summary though, when TokuDB writes to disk, it updates many rows, whereas InnoDB may write a leaf to disk with a single modified row, in some circumstances. More changes per write means fewer writes, which makes the flash drive wear out much more slowly.
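The wear claim comes down to simple arithmetic. A back-of-the-envelope illustration of "more changes per write means fewer writes" (the numbers here are made up, purely for illustration, and not a model of TokuDB's actual internals):

```python
# Made-up numbers to illustrate "more changes per write means fewer
# writes"; this is not a model of TokuDB's actual internals.
row_changes = 1_000_000
rows_per_write_worst_case = 1   # one modified row per leaf write
rows_per_write_batched = 100    # many row changes carried per write

writes_worst_case = row_changes // rows_per_write_worst_case
writes_batched = row_changes // rows_per_write_batched

print(writes_worst_case)                    # 1000000
print(writes_batched)                       # 10000
print(writes_worst_case // writes_batched)  # 100x fewer writes
```

Since flash cells tolerate a limited number of write/erase cycles, cutting the write count by a constant factor stretches drive lifetime by roughly the same factor.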

More Hot Schema Changes
TokuDB already has hot column addition, deletion and renaming. In this release we add hot column expansion, so you can change the size of the integers in a column or the number of characters in a field. These operations incur no downtime and the changes are immediately available on the table. In this release, we have also extended hot schema changes to partitioned tables.

Every disambiguation page at www.wikipedia.org, in every language, is testimony to a small part of the need for semantic disambiguation.

Did you know that as of today, there are 218,765 disambiguation pages in Wikipedia? Disambiguation Pages.

How many disambiguations could you use for an index at work, that don’t appear in Wikipedia?

You can stop at ten (10). Point made.

Search Hadoop with Search-Hadoop.com

Filed under: Hadoop,Lucene — Patrick Durusau @ 10:46 am

Search Hadoop with Search-Hadoop.com by Russell Jurney.

As the Hadoop ecosystem has exploded into many projects, searching for the right answers when questions arise can be a challenge. That's why I was thrilled to hear about search-hadoop.com, from Sematext. It has a sister site called search-lucene where you can… search lucene!

Class! Class! Pay attention now.

These are examples of value-added services.

Both of these are going on my browser tool bar. How about you?

How to teach Algorithms ?

Filed under: Algorithms — Patrick Durusau @ 10:20 am

How to teach Algorithms ? by Shiva Kintali.

From the post:

The goal is to have a very simple to understand “executable pseudo-code” along with an animation framework that “understands” this language. So I started designing a new language and called it Kintali language, for lack of a better word 🙂 . I borrowed syntax from several pseudo-codes. It took me almost two years to implement all the necessary features keeping in mind a broad range of algorithms. I developed an interpreter to translate this language into an intermediate representation with callbacks to an animation library. This summer, I finally implemented the animation library and the front-end in Objective-C. The result is the Algorithms App for iPad, released on Sep 20, 2012. This is my attempt to teach as many algorithms as possible by intuitive visualization and friendly exercises.

I like the idea of an “executable pseudo-code,” but I have this nagging feeling this has been done before.

Yes?

If I had a topic map of algorithm classes, textbooks and teaching aids I would know the answer right away.

But I don’t.

Are you aware of similar projects?

September 24, 2012

Foundation grants $575,000 for new OpenStreetMap tools

Filed under: Geographic Data,Mapping,Maps,Open Street Map — Patrick Durusau @ 5:22 pm

Foundation grants $575,000 for new OpenStreetMap tools

From the post:

The Knight Foundation has awarded a $575,000 grant to Washington-DC-based data visualisation and mapping firm Development Seed to work on new tools for OpenStreetMap (OSM). The Knight Foundation is a non-profit organisation dedicated to supporting quality journalism, media innovation and engaging communities. The award is one of six made by the Knight Foundation as part of Knight News Challenge: Data.

The funding will be used by developers from MapBox, part of Development Seed that designs maps using OSM data, to create three new open source tools for the OSM project to “lower the threshold for first time contributors”, while also making data “easier to consume by providing a bandwidth optimised data delivery system”.

Topic maps with geographic data are a subset of topic maps overall, but it's an important use case. And it is easy for people to relate to a "map" that looks like a "map." Takes less mental effort. (One of those "slow" thinking things.) 😉

Looking forward to more good things to come from OpenStreetMap!

Relatively Prime

Filed under: Mathematics — Patrick Durusau @ 4:34 pm

Relatively Prime: Stories from the Mathematical Domain

Stories about mathematics that I think will catch your interest.

I first saw this at Four short links: 24 September 2012 by Nat Torkington.

The tyranny of algorithms

Filed under: Algorithms,Computation — Patrick Durusau @ 4:18 pm

The tyranny of algorithms by Kaiser Fung.

This WSJ book review on “The Tyranny of Algorithms” (link) is well worth reading for those interested in how computer algorithms are used in business and government. I agree with most of what this author has to say.

You need to read Kaiser’s comment on the review before proceeding….

Back?

I am not altogether sure that algorithms are the problem the book and/or review make them out to be.

No doubt the concerns and problems described are real, but they don’t exist because of algorithms.

Rather they exist because we are basically lazy and accept the result of algorithms, just like we accept the judgements of others, advertising, etc.

Were we to question algorithms, judgements of others, advertising, we might be living in a very different world.

But we don’t, so we’re not.

So the question is, how to live with algorithms knowing we are too lazy to question them. Yes?

Are these shadows/echoes of Thinking, Fast and Slow?
