Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

February 19, 2017

Taking The Pressure Off Standing Rock

Filed under: #DAPL,Government,Protests — Patrick Durusau @ 2:49 pm

Standing Rock is standing firm:

However, their historic betrayers, the Bureau of Indian Affairs, and more recent betrayers, their own tribal council, are aligned to focus their efforts on the water protectors.

One of the disadvantages Standing Rock faces is that government sycophants who favor the pipeline can focus all their efforts at Standing Rock.

Consider this illustration of spreading their efforts over a wider area, say the 1,172 miles of the pipeline:

One or two breaches might be manageable and repairs would make economic sense. What about five major breaches? Or perhaps 10 major breaches? Each one in a different section and not overlapping too much in time.

Interest, as you know, runs on loans 24x7, and repairs drive up the break-even point for any endeavor.

Hemorrhaging cash at multiple locations isn’t sustainable, even for large foreign banks. Eventually (how long is unknown until figures come in for repairs, etc.), the entire pipeline will become unprofitable and be abandoned.

In the meantime, those points where cash is being lost by the barrelful (sorry) will capture the attention of investors.

Protecting DAPL From Breaches (Maps and Hunting Safety)

Filed under: #DAPL,Government,Protests — Patrick Durusau @ 2:16 pm

Any breach in the 1,172-mile length of the DAPL pipeline renders it useless.

Local sheriffs, underfunded and short-staffed, are charged with guarding DAPL’s 1,172-mile length, in addition to serving their communities.

Places to patrol include heavy equipment rental companies in Illinois, Iowa, North Dakota and South Dakota.

Sheriffs won’t have to pay overtime, and these maps will help deputies reach their patrol areas every day:

Illinois heavy equipment rental

Iowa heavy equipment rental

North Dakota heavy equipment rental

South Dakota heavy equipment rental

Hunting/Police Safety

Hunters have long used pipelines as lines of sight, which could put deputies patrolling the pipeline in harm’s way. Sheriffs should advertise the patrol locations of deputies well in advance. Due to their professionalism, you won’t find any breaches being made in the pipeline in areas under active deputy patrols.

Observations

Some people may question the effectiveness of patrolling heavy equipment rental companies and announced deputy patrols of the pipeline. But sheriffs juggle competing demands for resources and the good of their local community every day.

A community sees higher restaurant, motel, and employment figures as breaches are repaired.

If I were a sheriff, I would also bear in mind the local community votes in elections, not foreign banks.

February 18, 2017

Data Breach Digest 2017 (Verizon)

Filed under: Cybersecurity,Security — Patrick Durusau @ 5:24 pm

Data Breach Digest (Verizon)

From the report:

The Situation Room

Data breaches are complex affairs often involving some combination of human factors, hardware devices, exploited configurations or malicious software. As can be expected, data breach response activities—investigation, containment, eradication, notification, and recovery—are proportionately complex.

These response activities, and the lingering post-breach aftereffects, aren’t just an IT security problem; they’re an enterprise problem involving Legal Counsel, Human Resources, Corporate Communications and other Incident Response (IR) stakeholders. Each of these stakeholders brings a slightly different perspective to the breach response effort.

Last year, thousands of IR and cybersecurity professionals delved into the inaugural “Data Breach Digest—Scenarios from the Field” (aka “the RISK Team Ride-Along Edition”) to get a first-hand look into the inner workings of data breaches from an investigative response point of view (PoV).

Continued research into our recent caseload still supports our initial inklings that just over a dozen or so prevalent scenarios occur at any given time. Carrying forward from last year, we have come to realize that these data breach scenarios aren’t so much about threat actors, or even about the vulnerabilities they exploited, but are more about the situations in which the victim organizations and their IR stakeholders find themselves. This gives each scenario a distinct personality … a unique persona, per se.

This year, for the “Data Breach Digest—Perspective is Reality” (aka “the IR Stakeholder Edition”), we took a slightly different approach in bringing these scenarios to life. Each scenario narrative—again, based on real-world data breach response activities—is told from a different stakeholder PoV. As such, the PoV covers their critical decision pivot points, split-second actions taken, and crucial lessons learned from cases investigated by us – the Verizon RISK Team.
… (emphasis in original)

The “scenario” table mapping caught my eye:

The Scenari-cature names signal an amusing and engaging report awaits!

A must read!

To make up for missing this last year, here’s a link to 2016 Data Breach Digest.

Activists! Another Windows Vulnerability

Filed under: Cybersecurity,Microsoft,Security — Patrick Durusau @ 4:06 pm

If software vulnerabilities were the new “if it bleeds, it leads,” news organizations would report on little else.

Still, you have to credit The Hacker News with a great graphic for Google Discloses Windows Vulnerability That Microsoft Fails To Patch, Again! by Swati Khandelwal.

Microsoft is once again facing embarrassment for not patching a vulnerability on time.

Yes, Google’s Project Zero team has once again publicly disclosed a vulnerability (with POC exploit) affecting Microsoft’s Windows operating systems ranging from Windows Vista Service Pack 2 to the latest Windows 10 that had yet to be patched.
… (emphasis in original)

The Google report is more immediately useful but far less amusing than this post by Swati Khandelwal.

Swati reports that without an emergency patch from Microsoft this month, attackers have almost 30 days to exploit this vulnerability.

No rush considering the Verizon 2016 Data Breach Investigations Report shows hacks known since before 1999 are still viable:

Taking that into account, plus most potential targets’ strategy of layering insecure software on top of insecure software:


According to the Cisco 2017 Security Capabilities Benchmark Study, most companies use more than five security vendors and more than five security products in their environment. Fifty-five percent of the security professionals use at least six vendors; 45 percent use anywhere from one to five vendors; and 65 percent use six or more products.
… (Cisco 2017 Annual Cybersecurity Report, page 5)

Small targets could be more secure by going bare and pointing potential attackers to bank, competitor and finance targets with a BetterTargetsREADME file. (Warning: That is an untested suggestion.)

February 17, 2017

Paying To Avoid A Scarlet A

Filed under: Cybersecurity,Security — Patrick Durusau @ 9:05 pm

Two-thirds of US companies would pay to avoid public shaming scandals after a breach by Razvan Muresan

From the post:

Some 66% of companies would pay an average of $124k to avoid public shaming scandals following a security breach, according to a Bitdefender survey of 250 IT decision makers in the United States in companies with more than 1,000 PCs.

Some 14 percent would pay more than $500k, confirming that negative media headlines could have substantial financial consequences. In a recent case, officials from Verizon, which agreed to buy Yahoo’s core properties for $4.83 billion in July, told reporters that the company has “a reasonable basis” to suspect that the Yahoo security breach, one of the largest ever, could have a meaningful financial impact on the deal, according to multiple reports.

The ransomware report I was reading earlier said that discounts of 29% off the original ransom demand are common and that the trade tends toward the low end, several hundred dollars.

Perhaps Barron’s or the Wall Street Journal needs to find its way onto your reading list.

Congressmen Counsel Potential Leakers!

Filed under: Government,Journalism,News,Politics,Reporting — Patrick Durusau @ 8:39 pm

Federal Employees Guide to Sharing Key Information with the Public.

From the webpage:

On February 16, 2017, Congressman Ted W. Lieu (D | Los Angeles County) and Congressman Don Beyer (D | Virginia) released the following resource guide for federal employees who wish to break the Administration’s communications blackout on federal agencies. The guide explains how to safely and responsibly share information, and encourages employees to “Know Your Rights” and “Know Your Options.” In the “Know Your Rights” section, federal employees can learn about which federal laws apply to them. In the “Know Your Options” section, employees can learn about how to safely disseminate information to agency inspectors general and the press. The resource guide also includes links to an in-depth list of federal whistleblower statutes and information about agency inspectors general. The full press release can be found here.

Links to whistleblower resources, etc. follow.

Here’s a screen shot of the top of their guide:

The links for whistleblowers are great but rely upon the “you take all the risk, the media reaps all the glory” model.

Better than no leaks at all, but having news organizations step up with the cyber expertise to safely extract data sounds like a better model.

Mindstorms

Filed under: Education,Learning — Patrick Durusau @ 8:13 pm

Mindstorms by Seymour Papert.

From the webpage:

Seymour Papert’s Mindstorms was published by Basic Books in 1980, and outlines his vision of children using computers as instruments for learning. A second edition, with new Forewords by John Sculley and Carol Sperry, was published in 1993. The book remains as relevant now as when first published almost forty years ago.

The Media Lab is grateful to Seymour Papert’s family for allowing us to post the text here. We invite you to add your comments and reflections.

From the introduction:

…I believe that certain uses of very powerful computational technology and computational ideas can provide children with new possibilities for learning, thinking, and growing emotionally as well as cognitively….

You should read Mindstorms along with Geek Heresy by Kentaro Toyama.

Toyama gives numerous examples that dispel any naive faith in technology as a cure for social issues.

Given the near ubiquitous presence of computers in first world countries, how do you account for the lack of children with

…new possibilities for learning, thinking, and growing emotionally as well as cognitively….

If new learning or thinking has developed, it’s being very well hidden in national and international news reports.

Maps Enable Searching For DAPL Pipeline Breaches

Filed under: #DAPL,Government,Protests — Patrick Durusau @ 5:03 pm

As I mentioned yesterday in Stopping DAPL – One Breach At A Time, oil cannot flow through the pipeline in the face of known breaches.

But that presumes the ability to monitor the DAPL pipeline.

Someone, perhaps you, will discover a DAPL pipeline breach and notify the press and other responsible parties.

An unknown breach does no good for anyone and can result in environmental damage.

If you see a breach, report it!

The question is: Where do you look for breaches of the DAPL pipeline?

Here are maps filed by Dakota Access, LLC, in public hearings, that can help you with your public spirited endeavor.

North Dakota

A Project Aerial Maps

A.2 Avoidance and Exclusion Maps

A.4 Environmental Features Maps

B Tank Terminal Plot Plans

South Dakota

A1 – Project Vicinity Maps

A2 – Topographic Maps

A3 – Soil Maps

A4 – Hydrology Maps

A5 – USGS Landcover/Land Use Field Data Maps

Iowa

Construction Progress Maps (1 of 2) Dated: 12/28/2016

Construction Progress Maps (2 of 2) Dated: 12/28/2016

Illinois

Exhibit E, Project Route Map – Illinois Segment

Exhibit F, Legal Description of Illinois Route

Exhibit G, Landowner List (71 pages with parcel id, full name, addresses)

The maps vary from state to state but are of sufficient quality to enable discovery and monitoring of the pipeline for breaches.

5 Million Fungi

Filed under: Open Science,Science — Patrick Durusau @ 3:15 pm

5 Million Fungi – Every living thing is crawling with microorganisms — and you need them to survive by Dan Fost.

Fungus is growing in Brian Perry’s refrigerator — and not the kind blooming in someone’s forgotten lunch bag.

No, the Cal State East Bay assistant professor has intentionally packed his shelves with 1,500 Petri dishes, each containing a tiny sample of fungus from native and endemic Hawaiian plant leaves. The 45-year-old mycologist (a person who studies the genetic and biochemical properties of fungi, among many other things) figures hundreds of those containers hold heretofore-unknown species.

The professor’s work identifying and cataloguing fungal endophytes — microscopic fungi that live inside plants — carries several important implications. Scientists know little about the workings of these fungi, making them a particularly exciting frontier for examination: Learning about endophytes’ relationships to their host plants could save many endangered species; farmers have begun tapping into their power to help crops build resistance to pathogens; and researchers are interested in using them to unlock new compounds to make crucial medicines for people.

The only problem — finding, naming, and preserving them before it’s too late.
… (emphasis in original)

According to Naveed Davoodian in A Long Way to Go: Protecting and Conserving Endangered Fungi, you don’t need to travel to exotic locales to contribute to our knowledge of fungi in the United States.

Willow Nero, editor of McIlvainea: Journal of American Amateur Mycology writes in Commit to Mycology:


I hope you’ll do your part as a NAMA member by renewing your commitment to mycology—the science, that is. When we convene at the North American foray later this year, our leadership will present (and later publish in this journal) clear guidelines so mycologists everywhere can collect reliable data about fungi as part of the North American Mycoflora Project. We will let you know where to start and how to carry your momentum. All we ask is that you join us. Catalogue them all! Or at least set an ambitious goal for yourself or your local NAMA-affiliated club.

I did peek at the North American Mycoflora Project, which has this challenging slogan:

Without a sequenced specimen, it’s a rumor

Sounds like your kind of folks. 😉

Mycology as a hobby has three distinct positives: One, you are not in front of your computer monitor. Two, you are gaining knowledge. Three, (hopefully) you will decide to defend fellow residents who cannot defend themselves.

Ransomware for Activists?

Filed under: Cybersecurity,Ransomware,Security — Patrick Durusau @ 2:15 pm

An F-Secure infographic on ransomware starts:

That sounds a bit harsh, don’t you think?

What if the ransomware in question were being used to:

  • Cripple “business as usual” strategies of corporate entities
  • Force divestiture from morally questionable entities or projects
  • Interfere with unlawful surveillance
  • Sanction illegal law enforcement conduct (Think Standing Rock)

Would you still agree with “Abandon All Ethical And Moral Principles?”

What if ransomware were used to stop:

  • coal mining companies that dump “excess spoil” in rivers and streams
  • oil transport companies that maintain leaky pipelines
  • usurers such as title pawn companies
  • police and prosecutors who abuse minorities
  • (add your target(s) to the list)

Is that ethical and/or moral?

For the general state of ransomware, see Evaluating the Customer Journey of Crypto-Ransomware and the Paradox Behind It by F-Secure.

Make your own decisions but relinquishing a weapon because your enemy thinks poorly of its use makes no sense to me.

Twitter reduces reach of users it believes are abusive [More Opaque Censorship]

Filed under: Censorship,Free Speech,Twitter — Patrick Durusau @ 11:25 am

Twitter reduces reach of users it believes are abusive

More opaque censorship from Twitter:

Twitter has begun temporarily decreasing the reach of tweets from users it believes are engaging in abusive behaviour.

The new action prevents tweets from users Twitter has identified as being abusive from being displayed to people who do not follow them for 12 hours, thus reducing the user’s reach.

If the user were to mention someone who does not follow them on the social media site, that person would not see the tweet in their notifications. Again, this would last for 12 hours.

If the user who had posted abusive tweets was retweeted by someone else, this tweet would not be able to be seen by people who do not follow them, again reducing their Twitter reach.
… (emphasis in original)

I’m assuming this is one of the changes Ed Ho alluded to in An Update on Safety (February 7, 2017) when he said:

Collapsing potentially abusive or low-quality Tweets:

Our team has also been working on identifying and collapsing potentially abusive and low-quality replies so the most relevant conversations are brought forward. These Tweet replies will still be accessible to those who seek them out. You can expect to see this change rolling out in the coming weeks.
… (emphasis in original)

No announcements for:

  • Grounds for being deemed “abusive.”
  • Process for contesting designation as “abusive.”

Twitter is practicing censorship, the basis for which is opaque and the censored have no impartial public forum for contesting that censorship.

In the interest of space, I forego the obvious historical comparisons.

All of which could have been avoided by granting Twitter users:

The ability to create and share filters for tweets.

Even a crude filtering mechanism should enable me to filter tweets that contain my Twitter handle, but that don’t originate from anyone I follow.
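A minimal sketch of such a filter in Python, with a made-up tweet shape (Twitter’s actual API objects differ), just to show how little machinery this requires:

```python
def keep(tweet, handle, following):
    """Crude filter: drop tweets that mention my handle but do not
    originate from anyone I follow. The tweet shape is hypothetical."""
    mentions_me = handle.lower() in tweet["text"].lower()
    from_followed = tweet["author"] in following
    return not mentions_me or from_followed

stream = [
    {"author": "friend",   "text": "lunch later, @example?"},
    {"author": "stranger", "text": "@example you are wrong about everything"},
]

# Keeps the first tweet, drops the second.
print([t for t in stream if keep(t, "@example", following={"friend"})])
```

Users writing and sharing predicates like this one, rather than Twitter imposing them, is the whole point.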

So Ed Ho, why aren’t users being empowered to filter their own streams?

February 16, 2017

Aerial Informatics and Robotics Platform [simulator]

Filed under: Machine Learning,Simulations — Patrick Durusau @ 8:35 pm

Aerial Informatics and Robotics Platform (Microsoft)

From the webpage:

Machine learning is becoming an increasingly important artificial intelligence approach to building autonomous and robotic systems. One of the key challenges with machine learning is the need for many samples — the amount of data needed to learn useful behaviors is prohibitively high. In addition, the robotic system is often non-operational during the training phase. This requires debugging to occur in real-world experiments with an unpredictable robot.

The Aerial Informatics and Robotics platform solves for these two problems: the large data needs for training, and the ability to debug in a simulator. It will provide realistic simulation tools for designers and developers to seamlessly generate the copious amounts of training data they need. In addition, the platform leverages recent advances in physics and perception computation to create accurate, real-world simulations. Together, this realism, based on efficiently generated ground truth data, enables the study and execution of complex missions that might be time-consuming and/or risky in the real-world. For example, collisions in a simulator cost virtually nothing, yet provide actionable information for improving the design.

Open source simulator from Microsoft for drones.

How very cool!
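A minimal flight script against the simulator’s Python client looks roughly like the following. The package name and call signatures are from the airsim client and may differ for your release; treat this as a sketch, not a tested script:

```python
import airsim  # Python client for the simulator; assumes the simulator is running

client = airsim.MultirotorClient()  # connect over RPC
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

client.takeoffAsync().join()
# NED coordinates: z is negative for "up", so this climbs to 10 m at 5 m/s.
client.moveToPositionAsync(0, 0, -10, 5).join()
client.landAsync().join()
client.armDisarm(False)
```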

Imagine training your drone to search for breaches of the Dakota Access pipeline.

Or how to react when it encounters hostile drones.

Enjoy!

behind the scenes: cleaning dirty data

Filed under: Data — Patrick Durusau @ 5:11 pm

behind the scenes: cleaning dirty data

From the post:

Dirty Data. It’s everywhere! And that’s expected and ok and even frankly good imho — it happens when people are doing complicated things, in the real world, with lots of edge cases, and moving fast. Perfect is the enemy of good.

Alas it’s definitely behind-the-scenes work to find and fix dirty data problems, which means none of us learn from each other in the process. So — here’s a quick post about a dirty data issue we recently dealt with. Hopefully it’ll help you feel comradery, and maybe help some people using the BASE data.

We traced some oaDOI bugs to dirty records from PMC in the BASE open access aggregation database.

BASE = Bielefeld Academic Search Engine.

oaDOI = a DOI-like identifier that resolves to an open access version of an article.

PMC = PubMed Central.

Are you cleaning data or contributing more dirty data?

Can You Replicate Your Searches?

Filed under: Bioinformatics,Biomedical,Medical Informatics,Search Engines,Searching — Patrick Durusau @ 4:30 pm

A comment at PubMed raises the question of replicating reported literature searches:

From the comment:

Mellisa Rethlefsen

I thank the authors of this Cochrane review for providing their search strategies in the document Appendix. Upon trying to reproduce the Ovid MEDLINE search strategy, we came across several errors. It is unclear whether these are transcription errors or represent actual errors in the performed search strategy, though likely the former.

For instance, in line 39, the search is “tumour bed boost.sh.kw.ti.ab” [quotes not in original]. The correct syntax would be “tumour bed boost.sh,kw,ti,ab” [no quotes]. The same is true for line 41, where the commas are replaced with periods.

In line 42, the search is “Breast Neoplasms /rt.sh” [quotes not in original]. It is not entirely clear what the authors meant here, but likely they meant to search the MeSH heading Breast Neoplasms with the subheading radiotherapy. If that is the case, the search should have been “Breast Neoplasms/rt” [no quotes].

In lines 43 and 44, it appears as though the authors were trying to search for the MeSH term “Radiotherapy, Conformal” with two different subheadings, which they spell out and end with a subject heading field search (i.e., Radiotherapy, Conformal/adverse events.sh). In Ovid syntax, however, the correct search syntax would be “Radiotherapy, Conformal/ae” [no quotes] without the subheading spelled out and without the extraneous .sh.

In line 47, there is another minor error, again with .sh being extraneously added to the search term “Radiotherapy/” [quotes not in original].

Though these errors are minor and are highly likely to be transcription errors, when attempting to replicate this search, each of these lines produces an error in Ovid. If a searcher is unaware of how to fix these problems, the search becomes unreplicable. Because the search could not have been completed as published, it is unlikely this was actually how the search was performed; however, it is a good case study to examine how even small details matter greatly for reproducibility in search strategies.

A great reminder that replication of searches is a non-trivial task and that search engines are literal to the point of idiocy.
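Some of these errors are mechanical enough to lint for. A toy sketch in Python, deliberately limited to the two patterns the comment describes (real Ovid syntax has many more cases):

```python
import re

FIELDS = r"(?:sh|kw|ti|ab)"
DOTTED_FIELDS = re.compile(rf"\.{FIELDS}(?:\.{FIELDS})+")  # e.g. .sh.kw.ti.ab

def lint(search):
    """Flag the two transcription errors described in the comment above."""
    problems = []
    if DOTTED_FIELDS.search(search):
        problems.append("fields joined with periods; Ovid wants commas: .sh,kw,ti,ab")
    if "/" in search and search.rstrip().endswith(".sh"):
        problems.append("extraneous .sh after a MeSH heading search")
    return problems

for s in ["tumour bed boost.sh.kw.ti.ab",
          "Radiotherapy, Conformal/adverse events.sh",
          "Breast Neoplasms/rt"]:
    print(s, "->", lint(s) or "ok")
```

Nothing here replaces a librarian, but a published search strategy that passes even a crude check like this is more likely to be replicable.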

Stopping DAPL – One Breach At A Time

Filed under: #DAPL,Government,Protests — Patrick Durusau @ 4:02 pm

Despite years of opposition and a large number of donations, the Dakota Access pipeline is moving inexorably towards completion. Charlie Northcott writes in Dakota Access pipeline: Is the Standing Rock movement defeated?:

“Our hope is that the new administration in Washington will now provide North Dakota law enforcement the necessary resources to bring closure to the protests,” said Kyle Kirchmeier, the sheriff of the local Morton County Police, in a press release.

The last 1.5 mile (2.4 km) stretch of the pipeline is expected to be completed in less than 90 days.

Kyle “Bull Connor” Kirchmeier is the sheriff responsible for spraying Standing Rock protesters with water cannon in sub-freezing weather. A real piece of work.

For speculation purposes, let’s assume the government does overwhelm the protesters at Standing Rock.

Aside from completion, what does the 1,172-mile DAPL require to be used?

[Map: Bakken pipeline route]

It must have no known holes.

That is to say that if the pipeline were breached and that breach was known to the operator (as well as members of the press), no oil would flow.

Yes?

What do we know about the DAPL pipeline?

First, since the pipeline can be approached from either side, there are 2,344 miles of land for staging actions against the integrity of the pipeline.

The pipeline’s right of way is described in: Dakota Access Pipeline Project, U.S. Fish and Wildlife Service, Environmental Assessment, Grassland and Wetland Easement Crossings (May 2016):


Construction of the new pipeline would require a typical construction right-of-way (ROW) width of 125 feet in uplands, 100 feet in non-forested wetlands, 85 feet in forested areas (wetlands and uplands), and up to 150 feet in agricultural areas. Following construction, a 50-foot wide permanent easement would be retained along the pipeline. … (page 12)

Which means staging areas for pipeline interference activities can be located less than 30 yards (for US football fans) from the DAPL pipeline on either side.

A propaganda site for the DAPL builders helpfully notes:

99.98% of the pipeline is installed on privately owned property in North Dakota, South Dakota, Iowa, and Illinois. The Dakota Access Pipeline does not enter the Standing Rock Sioux reservation at any point.

Which of course means that you can lawfully, with the land owner’s permission, park a backhoe,

[Image: backhoe loader]

or, a bulldozer,

[Image: bulldozer]

quite close to the location of the DAPL pipeline.

Backhoes, bulldozers and suitable heavy equipment come in a wide variety of makes and models so these images are illustrative only.

The propaganda site I mentioned earlier also notes:


The Dakota Access Pipeline is an entirely underground pipeline. Only where there are pump stations, valves or testing stations is there any portion of the pipeline above ground. The pipeline is buried nearly 4 feet deep in most areas and in all agricultural lands, two feet deeper than required by law.

which if you remember your army training:

[Image: fighting position diagram]

(The Infantry Rifle Platoon and Squad, FM 3-21.8 (FM 7-8) March, 2007, page 8-35.)

puts the DAPL pipeline within easy reach of one of these:

[Image: entrenching tool]

Of course, an ordinary shovel works just as well.

[Image: shovel]

Anyone breaching or damaging the pipeline will be guilty of a variety of federal and state crimes and therefore should not do so.

If you discover a breach in the pipeline, however, you should document its location with a GPS phone and send the image to both local law enforcement and news organizations.

You will need maps to make sure you have discovered a breach in DAPL for reporting. I have some maps that will help. More on 17 February 2017.

DataBASIC

Filed under: Data Science,Education — Patrick Durusau @ 3:37 pm

DataBASIC

Not for you but an interesting resource for introducing children to working with data.

Includes WordCounter, WTFcsv, SameDiff and ConnectTheDots.

The network template is a csv file with a header, two fields separated by commas.
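For example, a minimal network file for ConnectTheDots might look like this (the header names are an assumption; the point is one header row and two comma-separated fields per line):

```
source,target
alice,bob
bob,carol
carol,alice
```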

Pick the right text/examples and you could have a class captivated pretty quickly.

Enjoy!

Bypassing ASLR Protection on 22 CPU Architectures (Why This Is Good News!)

Filed under: Cybersecurity,Government,Security — Patrick Durusau @ 3:11 pm

A Simple JavaScript Exploit Bypasses ASLR Protection On 22 CPU Architectures by Swati Khandelwal.

From the post:

Security researchers have discovered a chip flaw that could nullify hacking protections for millions of devices regardless of their operating system or application running on them, and the worse — the flaw can not be entirely fixed with any mere software update.

The vulnerability resides in the way the memory management unit (MMU), a component of many CPUs, works and leads to bypass the Address Space Layout Randomization (ASLR) protection.

ASLR is a crucial security defense deployed by all modern operating systems from Windows and Linux to macOS, Android, and the BSDs.

In general, ASLR is a memory protection mechanism which randomizes the location where programs run in a device’s memory. This, in turn, makes it difficult for attackers to execute malicious payloads in specific spots in memory when exploiting buffer overflows or similar bugs.

In short, for attackers, it’s like an attempt to burglarize a house blindfolded.

But now a group of researchers, known as VUSec, from the Vrije University in the Netherlands have developed an attack that can bypass ASLR protection on at least 22 processor micro-architectures from popular vendors like Intel, AMD, ARM, Allwinner, Nvidia, and others.

The attack, dubbed ASLR Cache or AnC, is particularly serious because it uses simple JavaScript code to identify the base addresses in memory where system and application components are executed.

So, merely visiting a malicious site can trigger the attack, which allows attackers to conduct more attacks targeting the same area of the memory to steal sensitive information stored in the PC’s memory.

See Swati’s post for two videos demonstrating this unpatchable security flaw in action.

For a more formal explanation of the flaw,

ASLR on the Line: Practical Cache Attacks on the MMU by Ben Gras, et al.

Abstract:

Address space layout randomization (ASLR) is an important first line of defense against memory corruption attacks and a building block for many modern countermeasures. Existing attacks against ASLR rely on software vulnerabilities and/or on repeated (and detectable) memory probing.

In this paper, we show that neither is a hard requirement and that ASLR is fundamentally insecure on modern cache-based architectures, making ASLR and caching conflicting requirements (ASLR⊕Cache, or simply AnC). To support this claim, we describe a new EVICT+TIME cache attack on the virtual address translation performed by the memory management unit (MMU) of modern processors. Our AnC attack relies on the property that the MMU’s page-table walks result in caching page-table pages in the shared last-level cache (LLC). As a result, an attacker can derandomize virtual addresses of a victim’s code and data by locating the cache lines that store the page-table entries used for address translation.

Relying only on basic memory accesses allows AnC to be implemented in JavaScript without any specific instructions or software features. We show our JavaScript implementation can break code and heap ASLR in two major browsers running on the latest Linux operating system with 28 bits of entropy in 150 seconds. We further verify that the AnC attack is applicable to every modern architecture that we tried, including Intel, ARM and AMD. Mitigating this attack without naively disabling caches is hard, since it targets the low-level operations of the MMU. We conclude that ASLR is fundamentally flawed in sandboxed environments such as JavaScript and future defenses should not rely on randomized virtual addresses as a building block.

and,

Reverse Engineering Hardware Page Table Caches Using Side-Channel Attacks on the MMU by Stephan van Schaik, et al.

Abstract:

Recent hardware-based attacks that compromise systems with Rowhammer or bypass address-space layout randomization rely on how the processor’s memory management unit (MMU) interacts with page tables. These attacks often need to reload page tables repeatedly in order to observe changes in the target system’s behavior. To speed up the MMU’s page table lookups, modern processors make use of multiple levels of caches such as translation lookaside buffers (TLBs), special-purpose page table caches and even general data caches. A successful attack needs to flush these caches reliably before accessing page tables. To flush these caches from an unprivileged process, the attacker needs to create specialized memory access patterns based on the internal architecture and size of these caches, as well as on how the caches interact with each other. While information about TLBs and data caches are often reported in processor manuals released by the vendors, there is typically little or no information about the properties of page table caches on different processors. In this paper, we retrofit a recently proposed EVICT+TIME attack on the MMU to reverse engineer the internal architecture, size and the interaction of these page table caches with other caches in 20 different microarchitectures from Intel, ARM and AMD. We release our findings in the form of a library that provides a convenient interface for flushing these caches as well as automatically reverse engineering page table caches on new architectures.
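Both papers build on one measurable fact: a memory access served from cache is far faster than one that must go to DRAM. A toy Python illustration of that timing signal (not the AnC attack itself, which additionally evicts and times page-table entries):

```python
import random
import time

def timed_xor(buf, indices):
    """XOR the bytes at the given indices, returning elapsed seconds."""
    start = time.perf_counter()
    acc = 0
    for i in indices:
        acc ^= buf[i]
    return time.perf_counter() - start

small = bytearray(32 * 1024)         # comfortably cache-resident
large = bytearray(64 * 1024 * 1024)  # far larger than a typical last-level cache

n = 1_000_000
idx_small = [random.randrange(len(small)) for _ in range(n)]
idx_large = [random.randrange(len(large)) for _ in range(n)]

print("cache-resident accesses:", timed_xor(small, idx_small))
print("cache-missing accesses: ", timed_xor(large, idx_large))
```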

So, Why Is This Good News?

Everything exists in a context and security flaws are no exception to that rule.

For example, H.J.Res.41 – Providing for congressional disapproval under chapter 8 of title 5, United States Code, of a rule submitted by the Securities and Exchange Commission relating to “Disclosure of Payments by Resource Extraction Issuers” reads in part:


Resolved by the Senate and House of Representatives of the United States of America in Congress assembled, That Congress disapproves the rule submitted by the Securities and Exchange Commission relating to “Disclosure of Payments by Resource Extraction Issuers” (published at 81 Fed. Reg. 49359 (July 27, 2016)), and such rule shall have no force or effect.
… (emphasis in original)

That may not sound like much until you read Disclosure of Payments by Resource Extraction Issuers, issued by the Security and Exchange Commission (SEC), which reads in part:


SUMMARY:

We are adopting Rule 13q-1 and an amendment to Form SD to implement Section 1504 of the Dodd-Frank Wall Street Reform and Consumer Protection Act relating to the disclosure of payments by resource extraction issuers. Rule 13q-1 was initially adopted by the Commission on August 22, 2012, but it was subsequently vacated by the U.S. District Court for the District of Columbia. Section 1504 of the Dodd-Frank Act added Section 13(q) to the Securities Exchange Act of 1934, which directs the Commission to issue rules requiring resource extraction issuers to include in an annual report information relating to any payment made by the issuer, a subsidiary of the issuer, or an entity under the control of the issuer, to a foreign government or the Federal Government for the purpose of the commercial development of oil, natural gas, or minerals. Section 13(q) requires a resource extraction issuer to provide information about the type and total amount of such payments made for each project related to the commercial development of oil, natural gas, or minerals, and the type and total amount of payments made to each government. In addition, Section 13(q) requires a resource extraction issuer to provide information about those payments in an interactive data format.
… (emphasis in original)

Or as Alex Guillén says in Trump signs bill killing SEC rule on foreign payments:

President Donald Trump Tuesday signed the first in a series of congressional regulatory rollback bills, revoking an Obama-era regulation that required oil and mining companies to disclose their payments to foreign governments.

The danger posed to global corruption by this SEC rule has passed.

What hasn’t passed is that the staffs of foreign governments and resource extraction issuers remain promiscuous web surfers.

Web surfers who will easily fall prey to a JavaScript exploit that bypasses ASLR protection!

Rather than protecting global corruption, H.J.Res 41 increases the incentives for breaching the networks of foreign governments and resource extraction issuers. You may find payment information and other embarrassing and/or incriminating information.

ASLR Cache or AnC gives you another tool for mining the world of the elites.

Rejoice at every new systemic security flaw. The elites have more to hide than youthful indiscretions and records of poor marital fidelity.

New MorphGNT Releases and Accentuation Analysis

Filed under: Bible,Greek,Linguistics,Manuscripts — Patrick Durusau @ 11:33 am

New MorphGNT Releases and Accentuation Analysis by James Tauber.

From the post:

Back in 2015, I talked about Annotating the Normalization Column in MorphGNT. This post could almost be considered Part 2.

I recently went back to that work and made a fresh start on a new repo gnt-accentuation intended to explain the accentuation of each word in the GNT (and eventually other Greek texts). There’s two parts to that: explaining why the normalized form is accented the way it is but then explaining why the word-in-context might be accented differently (clitics, etc). The repo is eventually going to do both but I started with the latter.

My goal with that repo is to be part of the larger vision of an “executable grammar” I’ve talked about for years where rules about, say, enclitics, are formally written up in a way that can be tested against the data. This means:

  • students reading a rule can immediately jump to real examples (or exceptions)
  • students confused by something in a text can immediately jump to rules explaining it
  • the correctness of the rules can be tested
  • errors in the text can be found

It is the fourth point that meant that my recent work uncovered some accentuation issues in the SBLGNT, normalization and lemmatization. Some of that has been corrected in a series of new releases of the MorphGNT: 6.08, 6.09, and 6.10. See https://github.com/morphgnt/sblgnt/releases for details of specifics. The reason for so many releases was I wanted to get corrections out as soon as I made them but then I found more issues!

There are some issues in the text itself which need to be resolved. See the Github issue https://github.com/morphgnt/sblgnt/issues/52 for details. I’d very much appreciate people’s input.

In the meantime, stay tuned for more progress on gnt-accentuation.
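To make the “executable grammar” idea concrete, here is a minimal sketch of a testable rule in Python: a deliberately oversimplified claim that certain one-syllable enclitics never carry their own accent, run over a toy token stream. Real Greek accent rules have many more cases, and this is not the design of the gnt-accentuation repo, just the shape of the idea:

```python
import unicodedata

ACCENTS = {"\u0301", "\u0300", "\u0342"}  # combining acute, grave, circumflex

def strip_accents(word):
    return "".join(c for c in unicodedata.normalize("NFD", word)
                   if c not in ACCENTS)

def has_accent(word):
    return any(c in ACCENTS for c in unicodedata.normalize("NFD", word))

# Oversimplified rule: these one-syllable enclitics surrender their accent
# to the preceding word, so they should appear unaccented.
ENCLITICS = {"με", "μοι", "μου", "σε", "σοι", "σου", "τις", "τι"}

def exceptions(tokens):
    """Yield (position, token) pairs where the toy rule appears violated."""
    for i, w in enumerate(tokens):
        if strip_accents(w) in ENCLITICS and has_accent(w):
            yield i, w

# Toy input; the real test would run over the MorphGNT token stream.
print(list(exceptions("λέγω σοι ὅτι σοί ἐστιν".split())))  # flags the accented σοί
```

Every exception the rule flags is either a refinement the rule needs or an error in the text, which is exactly the feedback loop James describes.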

Was it random chance that I saw this announcement from James and Getting your hands dirty with the Digital Manuscripts Toolkit on the same day?

😉

I should mention that Codex Sinaiticus (second oldest witness to the Greek New Testament) and numerous other Greek NT manuscripts have been digitized by the British Library.

Pairing these resources together offers a great opportunity to discover the Greek NT text as choices made by others. (The same holds true for the Hebrew Bible as well.)

Getting your hands dirty with the Digital Manuscripts Toolkit

Filed under: Digital Research,Library,Manuscripts — Patrick Durusau @ 11:01 am

Getting your hands dirty with the Digital Manuscripts Toolkit by Emma Stanford. (3 March 2017, 3:00pm to 5:00pm. Venue: Centre for Digital Scholarship, Weston Library.)

From the webpage:

In this workshop offered jointly by Bodleian Digital Library Systems and Services and the Centre for Digital Scholarship, you’ll learn how to make the most of the digitized resources at the Bodleian, the BnF, the Vatican Library and a host of other institutions, using software tools built around the International Image Interoperability Framework (IIIF). After a brief introduction to the main concepts of IIIF, you’ll learn how to use Mirador and the Digital Manuscripts Toolkit to gather images from different institutions into a single viewer; rearrange, remix and enhance image sequences and add new descriptive metadata; add transcriptions and annotations to digitized images; and embed zoomable images or whole manuscripts into your own website or blog. You’ll leave with your own virtual workspace, stocked with the images you’re using.

This event is open to all. No technological or scholarly expertise is necessary. The workshop will be most useful if you already have a few digitized books or manuscripts in mind that you’d like to work with, but if you don’t, we can help you find some. In addition to manuscripts, the tools can be applied to digitized printed books, maps, paintings and ephemera.

To participate in the workshop, you will need your own laptop, with internet access via eduroam or the Bodleian Libraries network.

If you are planning on being at the Bodleian on 3 March 2017, call ahead to reserve a seat for this free event!

If not, explore Mirador and the Digital Manuscripts Toolkit on your own.

Investigating A Cyberwar

Filed under: Cybersecurity,Government,Politics,Security — Patrick Durusau @ 10:34 am

Investigating A Cyberwar by Juliana Ruhfus.

From the post:

Editor’s Note: As the Syrian civil war has played out on the battlefields with gunshots and mortars, a parallel conflict has been fought online. The Syrian Electronic Army (SEA), a pro-Assad government group of hackers, has wielded bytes and malware to obtain crucial information from opponents of the Assad regime. The extracted information has led to arrests and torture of dissidents. In this interview, GIJN’s Eunice Au talks to Al Jazeera’s Juliana Ruhfus about the methodology and challenges of her investigation into the SEA and the process of transforming the story into an online game.

How did the idea for a documentary on the SEA come about? Who was part of your investigative team and how long did it take?

I had the idea for the film when I came across a report called “Behind Syria’s Digital Frontline,” published by a company called FireEye, cybersecurity analysts who had come across a cache of 30,000 Skype conversations that pro-Assad hackers had stolen from anti-Assad fighters. The hack provided a unique insight into the strategic intelligence that had been obtained from the Skype conversations, including Google images plans that outlined the battle at Khirbet Ghazaleh and images of missiles which the rebels were trying to purchase.

The fascinating thing was, it also shed light on how the hack was carried out. Pro-Assad hackers had created female avatars who befriended fighters on the front line by telling them how much they admired them and eventually asked to exchange photos. These images were infected with malware which proved devastating once downloaded. Computers in the field are shared by many fighters, allowing the hackers to spy on a large number of targets at once.

When I read the report I had the Eureka moment that I wait for when I am looking for a new idea: I could visualize the “invisible” cyberwar story and, for the first time ever, I really understood the crucial role that social engineering plays in hacking, that is the hacker’s psychological skill to get someone to click on an infected link.

I then shot the film together with director Darius Bazargan. Ozgur Kizilatis and Alexander Niakaris both did camera work and Simon Thorne was the editor. We filmed in London, Turkey, and France, and all together the production took just under three months.
… (emphasis in original)

C-suite level material but quite good, if a bit heavy-handed in its support for rebel forces in Syria. I favor the foxes over the hounds as well but prefer a more balanced approach to the potential of cyberwarfare.

Cyberweapons have the potential to be great equalizers with conventional forces. Punishing the use or supplying of cyberweapons, as Juliana reports here, is more than a little short-sighted. True, the Assad regime may have the cyber advantage today, but what about tomorrow? Or other governments?

“Tidying” Up Jane Austen (R)

Filed under: Literature,R,Text Mining — Patrick Durusau @ 9:29 am

Text Mining the Tidy Way by Julia Silge.

Thanks to Julia’s presentation I now know there is an R package with all of Jane Austen’s novels ready for text analysis.

OK, Austen may not be at the top of your reading list, but the Tidy techniques Julia demonstrates are applicable to a wide range of textual data.

Among those mentioned in the presentation, NASA datasets!

Julia, along with Dave Robinson, wrote: Text Mining with R: A Tidy Approach, available online now and later this year from O’Reilly.

February 15, 2017

EFF Dice-Generated Passphrases

Filed under: Cybersecurity,Security — Patrick Durusau @ 10:20 pm

EFF Dice-Generated Passphrases

From the post:

Create strong passphrases with EFF’s new random number generators! This page includes information about passwords, different wordlists, and EFF’s suggested method for passphrase generation. Use the directions below with EFF’s random number generator member gift or your own set of dice.

Ah, EFF random number generator member gift. 😉

Or you can order five Bicycle dice from Amazon. (Search for dice while you are there. I had no idea there were so many distinct dice sets.)

It’s mentioned but not emphasized that many sites don’t allow passphrases, which forces you to fall back on passwords. A password manager enables you to use different, strong passwords for every account.

Password managers should always be protected by strong passphrases. Keys to the kingdom as it were.
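If you’d rather let the computer roll, here is a minimal sketch of the dice method in Python. The wordlist path is an assumption; download eff_large_wordlist.txt from eff.org first:

```python
import secrets

def load_wordlist(path="eff_large_wordlist.txt"):
    """Map five-digit dice keys (e.g. '11111') to words."""
    words = {}
    with open(path) as f:
        for line in f:
            key, word = line.split()
            words[key] = word
    return words

def passphrase(words, n=6):
    picks = []
    for _ in range(n):
        # Five virtual dice per word, using a cryptographic RNG.
        roll = "".join(str(secrets.randbelow(6) + 1) for _ in range(5))
        picks.append(words[roll])
    return " ".join(picks)

print(passphrase(load_wordlist()))
```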

big-list-of-naughty-strings

Filed under: Cybersecurity,Security — Patrick Durusau @ 9:58 pm

big-list-of-naughty-strings by Max Woolf.

From the webpage:

The Big List of Naughty Strings is a list of strings which have a high probability of causing issues when used as user-input data.

You won’t see any of these strings on the Tonight Show with Jimmy Fallon. 😉

They are “naughty” when used as user-input data.

For those searching for a hook for legal liability, failure to test against this data set, or to document such testing, would be a good place to start.
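Running the list against your own input handling is a few lines of Python. A minimal sketch — the handler here is a stand-in for whatever code you actually ship, and the raw file URL follows the repo’s layout:

```python
import json
import urllib.request

URL = ("https://raw.githubusercontent.com/minimaxir/"
       "big-list-of-naughty-strings/master/blns.json")

def handle_input(s):
    # Stand-in for the code under test: a form handler, parser, etc.
    return s.strip().lower()

strings = json.load(urllib.request.urlopen(URL))

failures = []
for s in strings:
    try:
        handle_input(s)
    except Exception as exc:
        failures.append((repr(s)[:40], exc))

print(f"{len(failures)} of {len(strings)} naughty strings raised errors")
```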

Have you tested against the big-list-of-naughty-strings?

Amazon Chime – AES 256-bit Encryption Secure – Using Whose Key?

Filed under: Cybersecurity,Privacy — Patrick Durusau @ 9:05 pm

Amazon Chime, Amazon’s competitor to Skype, WebEx and Google Hangouts.

I’m waiting on answers about why the Chime Dialin Rates page omits all of Africa, as well as Burma, Cambodia, Laos and Thailand.

While I wait for that answer, have you read the security claim for Chime?

Security:


Amazon Chime is an AWS service, which means you benefit from a data center and network architecture built to meet the requirements of the most security-sensitive organizations. In addition, Amazon Chime features security capabilities built directly into the service. Messages, voice, video, and content are encrypted using AES 256-bit encryption. The visual roster makes it easy to see who has joined the meeting, and meetings can be locked so that only authenticated users can join.

We have all heard stories of the super strength of AES 256-bit encryption:


As shown above, even with a supercomputer, it would take 1 billion billion years to crack the 128-bit AES key using brute force attack. This is more than the age of the universe (13.75 billion years). If one were to assume that a computing system existed that could recover a DES key in a second, it would still take that same machine approximately 149 trillion years to crack a 128-bit AES key.
… (How secure is AES against brute force attacks? by Mohit Arora.)

Longer than the universe is old! That’s secure.

Or is it?

Remember, the age-of-the-universe example assumes a brute force attack.

What if an FBI agent shows up with a National Security Letter (NSL)?

Or a conventional search warrant demanding the decrypted content of a Chime conversation?

Unlocking AES encryption with the key is quite fast.

Yes?
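A minimal sketch with the pyca/cryptography package makes the asymmetry plain: brute force takes longer than the universe has existed, but decryption with the key takes microseconds. (An illustration of AES-GCM generally, not of Chime’s internals.)

```python
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # whoever holds this wins
aesgcm = AESGCM(key)
nonce = os.urandom(12)
ciphertext = aesgcm.encrypt(nonce, b"meeting audio chunk " * 1000, None)

start = time.perf_counter()
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
print(f"decrypted {len(plaintext)} bytes in {time.perf_counter() - start:.6f}s")
```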

PS: This isn’t a weakness limited to Chime. Any encryption where the key is not under your control is by definition insecure.

Unmet Needs for Analyzing Biological Big Data… [Data Integration #1 – Spells Market Opportunity]

Filed under: BigData,Bioinformatics,Biomedical,Data Integration,Data Management,Data Mining — Patrick Durusau @ 8:09 am

Unmet Needs for Analyzing Biological Big Data: A Survey of 704 NSF Principal Investigators by Lindsay Barone, Jason Williams, David Micklos.

Abstract:

In a 2016 survey of 704 National Science Foundation (NSF) Biological Sciences Directorate principal investigators (BIO PIs), nearly 90% indicated they are currently or will soon be analyzing large data sets. BIO PIs considered a range of computational needs important to their work, including high performance computing (HPC), bioinformatics support, multi-step workflows, updated analysis software, and the ability to store, share, and publish data. Previous studies in the United States and Canada emphasized infrastructure needs. However, BIO PIs said the most pressing unmet needs are training in data integration, data management, and scaling analyses for HPC, acknowledging that data science skills will be required to build a deeper understanding of life. This portends a growing data knowledge gap in biology and challenges institutions and funding agencies to redouble their support for computational training in biology.

In particular, the needs topic maps can address rank #1, #2, #6, #7, and #10, or as the authors found:


A majority of PIs—across bioinformatics/other disciplines, larger/smaller groups, and the four NSF programs—said their institutions are not meeting nine of 13 needs (Figure 3). Training on integration of multiple data types (89%), on data management and metadata (78%), and on scaling analysis to cloud/HP computing (71%) were the three greatest unmet needs. High performance computing was an unmet need for only 27% of PIs—with similar percentages across disciplines, different sized groups, and NSF programs.

or graphically (figure 3):

So, cloud, distributed, parallel, pipelining, etc., processing is insufficient?

Pushing undocumented and unintegratable data at ever-increasing speeds is impressive but gives no joy?

This report will provoke another round of Esperanto fantasies, that is, the creation of “universal” vocabularies which, if used by everyone and back-mapped to all existing literature, would solve the problem.

The number of Esperanto fantasies and the cost/delay of back-mapping to legacy data defeat all such efforts. Those defeats haven’t prevented repeated funding of such fantasies in the past and present, and no doubt won’t in the future.

Perhaps those defeats are a question of scope.

That is rather than even attempting some “universal” interchange of data, why not approach it incrementally?

I suspect the PI’s surveyed each had some particular data set in mind when they mentioned data integration (which itself is a very broad term).

Why not seek out, develop and publish data integrations in particular instances, as opposed to attempting to theorize what might work for data yet unseen?
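Even a crosswalk published as code counts. A minimal sketch, with hypothetical field names, of integrating two datasets’ vocabularies for one concrete case rather than all possible cases:

```python
# A published crosswalk for one concrete pair of datasets, rather than a
# universal vocabulary. All source and field names are hypothetical.
CROSSWALK = {
    ("lab_a", "organism"): "taxon",
    ("lab_b", "species_name"): "taxon",
    ("lab_a", "gene_id"): "gene",
    ("lab_b", "locus_tag"): "gene",
}

def integrate(source, record):
    """Rename one record's fields into the shared target vocabulary."""
    return {CROSSWALK.get((source, k), k): v for k, v in record.items()}

merged = [
    integrate("lab_a", {"organism": "E. coli", "gene_id": "b0001"}),
    integrate("lab_b", {"species_name": "E. coli", "locus_tag": "thrL"}),
]
print(merged)  # both records now share the keys "taxon" and "gene"
```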

The need topic maps were meant to meet remains unmet, with no signs of lessening.

Opportunity knocks. Will we answer?

February 14, 2017

The Rise of the Weaponized AI Propaganda Machine

Filed under: Artificial Intelligence,Government,Politics — Patrick Durusau @ 8:48 pm

The Rise of the Weaponized AI Propaganda Machine by Berit Anderson and Brett Horvath.

From the post:

“This is a propaganda machine. It’s targeting people individually to recruit them to an idea. It’s a level of social engineering that I’ve never seen before. They’re capturing people and then keeping them on an emotional leash and never letting them go,” said professor Jonathan Albright.

Albright, an assistant professor and data scientist at Elon University, started digging into fake news sites after Donald Trump was elected president. Through extensive research and interviews with Albright and other key experts in the field, including Samuel Woolley, Head of Research at Oxford University’s Computational Propaganda Project, and Martin Moore, Director of the Centre for the Study of Media, Communication and Power at Kings College, it became clear to Scout that this phenomenon was about much more than just a few fake news stories. It was a piece of a much bigger and darker puzzle — a Weaponized AI Propaganda Machine being used to manipulate our opinions and behavior to advance specific political agendas.

By leveraging automated emotional manipulation alongside swarms of bots, Facebook dark posts, A/B testing, and fake news networks, a company called Cambridge Analytica has activated an invisible machine that preys on the personalities of individual voters to create large shifts in public opinion. Many of these technologies have been used individually to some effect before, but together they make up a nearly impenetrable voter manipulation machine that is quickly becoming the new deciding factor in elections around the world.

Before you get too panicked, remember the techniques attributed to Cambridge Analytica were in use in the 1960 Kennedy presidential campaign. And have been in use since then by marketeers for every known variety of product, including politicians.

It’s hard to know if Anderson and Horvath are trying to drum up more business for Cambridge Analytica or if they are genuinely concerned for the political process.

Granted, Cambridge Analytica has more data than was available in the 1960s, but many people, not just Cambridge Analytica, have labored on the manipulation of public opinion since then.

If people were as easy to sway, politically speaking, as Anderson and Horvath posit, then why is there any political diversity at all? Shouldn’t we all be marching in lockstep by now?

Oh, it’s a fun read so long as you don’t take it too seriously.

Besides, if a “weaponized AI propaganda machine” is that dangerous, isn’t the best defense a good offense?

I’m all for cranking up a “demonized AI propaganda machine” if you have the funding.

Yes?

We’re Bringing Learning to Rank to Elasticsearch [Merging Properties Query Dependent?]

Filed under: DSL,ElasticSearch,Merging,Search Engines,Searching,Topic Maps — Patrick Durusau @ 8:26 pm

We’re Bringing Learning to Rank to Elasticsearch.

From the post:

It’s no secret that machine learning is revolutionizing many industries. This is equally true in search, where companies exhaust themselves capturing nuance through manually tuned search relevance. Mature search organizations want to get past the “good enough” of manual tuning to build smarter, self-learning search systems.

That’s why we’re excited to release our Elasticsearch Learning to Rank Plugin. What is learning to rank? With learning to rank, a team trains a machine learning model to learn what users deem relevant.

When implementing Learning to Rank you need to:

  1. Measure what users deem relevant through analytics, to build a judgment list grading documents as exactly relevant, moderately relevant, not relevant, for queries
  2. Hypothesize which features might help predict relevance such as TF*IDF of specific field matches, recency, personalization for the searching user, etc.
  3. Train a model that can accurately map features to a relevance score
  4. Deploy the model to your search infrastructure, using it to rank search results in production

Don’t fool yourself: underneath each of these steps lie complex, hard technical and non-technical problems. There’s still no silver bullet. As we mention in Relevant Search, manual tuning of search results comes with many of the same challenges as a good learning to rank solution. We’ll have more to say about the many infrastructure, technical, and non-technical challenges of mature learning to rank solutions in future blog posts.

… (emphasis in original)
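A hedged sketch of steps 1 through 3, using scikit-learn in place of the plugin’s own tooling and with invented feature values, just to fix the shapes in mind:

```python
from sklearn.ensemble import GradientBoostingRegressor

# Step 1: a judgment list -- (grade, query id, feature vector) rows.
# Grades: 2 = exactly relevant, 1 = moderately relevant, 0 = not relevant.
# Step 2: the hypothesized features, e.g. [title TF*IDF, recency, exact match].
judgments = [
    (2, "q1", [12.3, 0.9, 1.0]),
    (1, "q1", [ 7.1, 0.4, 0.0]),
    (0, "q1", [ 0.8, 0.1, 0.0]),
    (2, "q2", [10.5, 0.7, 1.0]),
    (0, "q2", [ 1.2, 0.9, 0.0]),
]

# Step 3: train a model mapping features to a relevance score.
X = [features for _, _, features in judgments]
y = [grade for grade, _, _ in judgments]
model = GradientBoostingRegressor().fit(X, y)

# Step 4 would serialize this model into the search engine, which then
# applies it to rank results at query time.
print(model.predict([[9.0, 0.5, 1.0]]))
```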

A great post as always but of particular interest for topic map fans is this passage:


Many of these features aren’t static properties of the documents in the search engine. Instead they are query dependent – they measure some relationship between the user or their query and a document. And to readers of Relevant Search, this is what we term signals in that book.
… (emphasis in original)

Do you read this as suggesting the merging exhibited to users should depend upon their queries?

That two or more users, with different query histories, could (should?) get different merged results from the same topic map?

Now that’s an interesting suggestion!

Enjoy this post and follow the blog for more of same.

(I have a copy of Relevant Search waiting to be read so I had better get to it!)

Fundamentals of Functional Programming (email lessons)

Filed under: Functional Programming,Programming — Patrick Durusau @ 7:41 pm

Learn the fundamentals of functional programming — for free, in your inbox by Preethi Kasireddy.

From the post:

If you’re a software developer, you’ve probably noticed a growing trend: software applications keep getting more complicated.

It falls on our shoulders as developers to build, test, maintain, and scale these complex systems. To do so, we have to create well-structured code that is easy to understand, write, debug, reuse, and maintain.

But actually writing programs like this requires much more than just practice and patience.

In my upcoming course, Learning Functional JavaScript the Right Way, I’ll teach you how to use functional programming to create well-structured code.

But before jumping into that course (and I hope you will!), there’s an important prerequisite: building a strong foundation in the underlying principles of functional programming.

So I’ve created a new free email course that will take you on a fun and exploratory journey into understanding some of these core principles.

Let’s take a look at what the email course will cover, so you can decide how it fits into your programming education.
…(emphasis in original)

I haven’t taken an email-oriented course in quite some time, so I am interested to see how it contrasts with video lectures, etc.

Enjoy!

February 13, 2017

Deep Learning (MIT Press Book) – Published (and still online)

Filed under: Deep Learning — Patrick Durusau @ 10:16 pm

Deep Learning by Yoshua Bengio, Ian Goodfellow and Aaron Courville.

From the introduction:


1.1 Who Should Read This Book?

This book can be useful for a variety of readers, but we wrote it with two main target audiences in mind. One of these target audiences is university students (undergraduate or graduate) learning about machine learning, including those who are beginning a career in deep learning and artificial intelligence research. The other target audience is software engineers who do not have a machine learning or statistics background, but want to rapidly acquire one and begin using deep learning in their product or platform. Deep learning has already proven useful in many software disciplines including computer vision, speech and audio processing, natural language processing, robotics, bioinformatics and chemistry, video games, search engines, online advertising and finance.

This book has been organized into three parts in order to best accommodate a variety of readers. Part I introduces basic mathematical tools and machine learning concepts. Part II describes the most established deep learning algorithms that are essentially solved technologies. Part III describes more speculative ideas that are widely believed to be important for future research in deep learning.

Readers should feel free to skip parts that are not relevant given their interests or background. Readers familiar with linear algebra, probability, and fundamental machine learning concepts can skip part I, for example, while readers who just want to implement a working system need not read beyond part II. To help choose which chapters to read, figure 1.6 provides a flowchart showing the high-level organization of the book.

We do assume that all readers come from a computer science background. We assume familiarity with programming, a basic understanding of computational performance issues, complexity theory, introductory level calculus and some of the terminology of graph theory.

This promises to be a real delight, whether read for an application space or to get a better handle on deep learning.

How to Listen Better [Not Just For Reporters]

Filed under: Communication,Journalism,News,Reporting — Patrick Durusau @ 9:42 pm

How to Listen Better by Josh Stearns.

From the post:

In my weekly newsletter, The Local Fix, I compiled a list of guides, tools, and examples of how newsrooms can listen more deeply to local communities. I’m sharing it here in case it can be useful to others, and to encourage people to add to the list.

See which of Josh’s resources resonate with you.

These resources are in the context of news/reporting but developing good listening skills is an asset in any field.

Here’s a free tip since you are likely sitting in front of your computer monitor:

If someone comes to talk to you, turn away from your monitor and pay attention to the person speaking.

Seriously, try that for a week and see if your communication with co-workers improves.

PS: Do read posts before you tweet responses to them. As they say, “reading is fundamental.”
