The impact of fake news, propaganda and misinformation has been widely scrutinized since the US election. Fake news actually outperformed real news on Facebook during the final weeks of the election campaign, according to an analysis by Buzzfeed, and even outgoing president Barack Obama has expressed his concerns.
But a growing cadre of technologists, academics and media experts are now beginning the quixotic process of trying to think up solutions to the problem, starting with a rambling 100+ page open Google document set up by Upworthy founder Eli Pariser.
Woolf captures the essential wrongness of the now 120 pages of suggestions, quoting Claire Wardle:
“The biggest challenge is who wants to be the arbiter of truth and what truth is,” said Claire Wardle, research director for the Tow Center for Digital Journalism at Columbia University. “The way that people receive information now is increasingly via social networks, so any solution that anybody comes up with, the social networks have to be on board.”
Don’t worry, selecting the arbiter of truth and what truth is won’t be difficult.
The authors of these suggestions see their favorite candidate every day:
So long as they aren’t seeing my image (substitute your name/image) in the mirror, I’m not interested in any censorship proposal.
Personally, even if offered the post of Internet Censor, I would turn it down.
I can’t speak for you but I am unable to be equally impartial to all. Nor do I trust anyone else to be equally impartial.
The “solution” to “fake news,” if you think that is a meaningful term, is more news, not less.
Enable users to easily compare and contrast news sources, if they so choose. Freedom means being free to make mistakes as well as good choices (from some point of view).
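The "more news, not less" approach could be sketched as a simple side-by-side aggregator. Everything below (the function name, the sample outlets and headlines) is a hypothetical illustration of the idea, not an existing service:

```python
# Hypothetical sketch: present competing accounts of one story side by side,
# leaving the judgment of "truth" to the reader rather than to a censor.

def compare_sources(story_id, coverage):
    """Return every source's headline for a story, sorted by source name.

    coverage: dict mapping source name -> that source's headline for story_id.
    No source is filtered out; the reader sees them all.
    """
    return sorted(coverage.items())

coverage = {
    "Outlet A": "Candidate cheered at rally",
    "Outlet B": "Candidate met with boos and cheers",
    "Outlet C": "Mixed reception for candidate",
}

for source, headline in compare_sources("rally-2016-11", coverage):
    print(f"{source}: {headline}")
```

The design choice is deliberate: the tool ranks nothing and removes nothing; it only juxtaposes.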
Many people think that border-related policies impact only people living in border towns like El Paso or San Diego. The reality is that Border Patrol’s interior enforcement operations encroach deep into and across the United States, affecting the majority of Americans.
Roughly two-thirds of the United States’ population, about 200 million people, lives within the 100-mile zone that an outdated federal regulation defines as the border zone—that is, within 100 miles of a U.S. land or coastal border.
Although this zone is not literally “Constitution free”—constitutional protections do still apply—the Border Patrol frequently ignores those protections and runs roughshod over individuals’ civil liberties.
This release features an important security update to Firefox and contains, in addition to that, an update to NoScript.
The security flaw responsible for this urgent release is already actively exploited on Windows systems. Even though there is currently, to the best of our knowledge, no similar exploit available for OS X or Linux users, the underlying bug affects those platforms as well. Thus we strongly recommend that all users apply the update to their Tor Browser immediately. A restart is required for it to take effect.
Tor Browser users who had set their security slider to “High” are believed to have been safe from this vulnerability.
We will have alpha and hardened Tor Browser updates out shortly. In the meantime, users of these series can mitigate the security flaw in at least two ways:
1) Set the security slider to “High,” as this prevents the exploit from working.
2) Switch to the stable series until updates for alpha and hardened are available, too.
A little over 10 years old now, predating Heartbleed for example, but still an interesting read.
I am and remain an open source advocate, but not on the basis of false claims of bug finding. Open source improves your chances of finding spyware. No guarantees, but open source improves your chances.
Why any government or enterprise would run closed source software is a mystery to me. They might as well upload all their work to the NSA on a weekly basis. At least with uploads you create a reminder of your risk, a reminder that is missing with non-open-source software.
Posted in Cybersecurity, Security | Comments Off on Urgent: Update Your Tor Browser [Today, Yes, Today] + Aside on shallow bugs
John has a target: name, country, brief context, and maybe the email address or website. John has been given a goal: maybe eavesdropping, taking a website offline, or stealing intellectual property. And John has been given constraints: maybe he cannot risk detection, or he has to act within 24 hours, or he cannot reach out to the state-owned telecommunications company for help.
John is a government-backed digital attacker. He sits in an office building somewhere, at a desk. Maybe this is the job he wanted when he was growing up, or maybe it was a way to pay the bills and stretch his technical muscles. He probably has plans for the weekend.
Let’s say, for the sake of this example, that John’s target is Henry, in the same country as John. John’s goal is to copy all the information on Henry’s computer without being detected. John can get help from other government agencies. There’s no rush.
The first thing to realize is that John, like most people, is a busy guy. He’s not going to do more work than necessary. First, he’ll try to use traditional, straightforward techniques — nothing fancy — and only if those methods fail will he try to be more creative with his attack.
Security experts at Check Point have discovered a new very aggressive form of Android malware that already compromised no less than 1 million Google accounts and which can infect approximately 74 percent of the Android phones currently on the market.
The firm warns that the malware which they call Gooligan is injected into a total of 86 Android apps that are delivered through third-party marketplaces (you can check the full list of apps in the box at the end of the article). Once installed, these apps root the phone to get full access to the device and then attempt to deploy malicious software which can be used to steal authentication tokens for Google accounts.
This pretty much gives the attackers full control over the targeted Google accounts, and as long as vulnerable phones have Gmail, Google Drive, Google Chrome, YouTube, Google Photos, or any other Google app that can be used with an account, there’s a big chance that the attack is successful.
…(emphasis in original)
I submitted my email today at Gab and got this message:
Done! You’re #1320420 in the waiting list.
Only three rules:
We have a zero tolerance policy against illegal pornography. Such material will be instantly removed and the owning account will be dealt with appropriately per the advice of our legal counsel. We reserve the right to ban accounts that share such material. We may also report the user to local law enforcement per the advice of our legal counsel.
Threats and Terrorism
We have a zero tolerance policy for violence and terrorism. Users are not allowed to make threats of, or promote, violence of any kind or promote terrorist organizations or agendas. Such users will be instantly removed and the owning account will be dealt with appropriately per the advice of our legal counsel. We may also report the user to local and/or federal law enforcement per the advice of our legal counsel.
What defines a ‘terrorist organization or agenda’? Any group that is labelled as a terrorist organization by the United Nations and/or United States of America classifies as a terrorist organization on Gab.
Users are not allowed to post others’ confidential information, including but not limited to credit card numbers, street numbers, and SSNs, without their expressed authorization.
If Gab is listening, I can get the rules down to one:
Court Ordered Removal
When Gab receives a court order from a court of competent jurisdiction ordering the removal of identified, posted content, at (service address), the posted, identified content will be removed.
Simple, fair, gets Gab and its staff out of the censorship business and provides a transparent remedy.
Twitter and Facebook can keep spending uncompensated time and effort trying to be universal and fair censors. Gab has the opportunity to reach up and grab those $100 bills flying overhead for filtered news services.
What is the New York Times if not an opinionated and poorly run filter on all the possible information it could report?
Apply that same lesson to social media!
PS: Seriously, before going public, I would go to the one court-based rule on content. There’s no profit and no wins in censoring any content on your own. Someone will always want more or less. Courts get paid to make those decisions.
Check with your lawyers, but if you don’t look at any content, you can’t be charged with constructive notice of it. Unless and until someone points it out; then you have to follow the DMCA, court orders, etc.
Each weekday, dozens of U.S. government aircraft take to the skies and slowly circle over American cities. Piloted by agents of the FBI and the Department of Homeland Security (DHS), the planes are fitted with high-resolution video cameras, often working with “augmented reality” software that can superimpose onto the video images everything from street and business names to the owners of individual homes. At least a few planes have carried devices that can track the cell phones of people below. Most of the aircraft are small, flying a mile or so above ground, and many use exhaust mufflers to mute their engines — making them hard to detect by the people they’re spying on.
The government’s airborne surveillance has received little public scrutiny — until now. BuzzFeed News has assembled an unprecedented picture of the operation’s scale and sweep by analyzing aircraft location data collected by the flight-tracking website Flightradar24 from mid-August to the end of December last year, identifying about 200 federal aircraft. Day after day, dozens of these planes circled above cities across the nation.
The FBI and the DHS would not discuss the reasons for individual flights but told BuzzFeed News that their planes are not conducting mass surveillance.
The DHS said that its aircraft were involved with securing the nation’s borders, as well as targeting drug smuggling and human trafficking, and may also be used to support investigations by the FBI and other law enforcement agencies. The FBI said that its planes are only used to target suspects in specific investigations of serious crimes, pointing to a statement issued in June 2015, after reporters and lawmakers started asking questions about FBI surveillance flights.
“It should come as no surprise that the FBI uses planes to follow terrorists, spies, and serious criminals,” said FBI Deputy Director Mark Giuliano, in that statement. “We have an obligation to follow those people who want to hurt our country and its citizens, and we will continue to do so.”
I’m not surprised the FBI follows terrorists, spies, and serious criminals.
What’s problematic is that the FBI follows all of us and then, after the fact, picks out alleged terrorists, spies and serious criminals.
The FBI could just as easily select people on their way to a tryst with a government official’s wife, or to attend an AA meeting, or to attend an unpopular church.
Once collected, the resulting information is subject to any number of uses and abuses.
Aldhous and Seife report the flights drop 70% on the weekend, so if you are up to mischief, plan it for the weekend.
When writing about the inevitable surveillance excesses under President Trump, give credit to President Obama and his supporters, who built the surveillance state Trump inherited.
Posted in FBI, Government, Privacy | Comments Off on Spies in the Skies [Fostered by Obama, Inherited by Trump]
Tracing its roots to October 1941, CIA’s Cartography Center has a long, proud history of service to the Intelligence Community (IC) and continues to respond to a variety of finished intelligence map requirements. The mission of the Cartography Center is to provide a full range of maps, geographic analysis, and research in support of the Agency, the White House, senior policymakers, and the IC at large. Its chief objectives are to analyze geospatial information, extract intelligence-related geodata, and present the information visually in creative and effective ways for maximum understanding by intelligence consumers.
Since 1941, the Cartography Center maps have told the stories of post-WWII reconstruction, the Suez crisis, the Cuban Missile crisis, the Falklands War, and many other important events in history.
In cartographic heritage we suddenly find maps by the same mapmaker and of the same area, published in different years, or new editions due to the integration of cartographic data, as in national cartographic series. These maps have the same projective system and the same cut, but they present very small differences. Manual comparison can be very difficult and yield uncertain results, because it is easy to overlook some particulars. An automatic procedure to compare these maps is needed, and a solution can be given by digital map comparison.
In recent years our experience in cartographic data processing has led us to look for new tools for digital comparison, and today a solution is given by new software, ACM (Automatic Correlation Map), which finds areas that are candidates to contain differences between two maps. ACM is based on image matching, a key component in almost any image analysis process.
Interesting paper but it presupposes a closeness of the maps that is likely to be missing when comparing CIA maps to other maps of the same places and time period.
I am in the process of locating other tools for map comparison.
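The core of ACM's approach, tiling two co-registered maps and flagging tiles where the pixels disagree, can be sketched in a few lines. This is a minimal illustration of tile-based image matching, not ACM's actual algorithm; the tile size and threshold are assumptions:

```python
# Sketch of the ACM idea: tile two co-registered raster maps (same projection,
# same cut) and flag tiles whose pixel disagreement exceeds a threshold as
# candidate difference areas. Illustrative only; not ACM's real code.

def candidate_difference_tiles(map_a, map_b, tile=2, threshold=0.25):
    """Return (row, col) tile coordinates where the two maps disagree.

    map_a, map_b: equal-sized 2D lists of pixel values.
    tile: tile edge length in pixels.
    threshold: fraction of differing pixels above which a tile is flagged.
    """
    rows, cols = len(map_a), len(map_a[0])
    flagged = []
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            pixels = [(i, j)
                      for i in range(r, min(r + tile, rows))
                      for j in range(c, min(c + tile, cols))]
            differing = sum(map_a[i][j] != map_b[i][j] for i, j in pixels)
            if differing / len(pixels) > threshold:
                flagged.append((r // tile, c // tile))
    return flagged

a = [[0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]
b = [[0, 0, 1, 1],
     [0, 0, 1, 1],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]
print(candidate_difference_tiles(a, b))  # [(0, 1)] — only the top-right tile differs
```

As the commentary above notes, this presupposes the maps are already aligned; comparing a CIA map against an arbitrary map of the same region would first require registration.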
Numerous false news accounts are circulating about president-elect Trump and the Emoluments Clause.
The story line is that Trump must divest himself of numerous businesses to avoid violating the “Emoluments Clause” of the U.S. Constitution. But when you read the Emoluments Clause:
Clause 8. No Title of Nobility shall be granted by the United States: And no Person holding any Office of Profit or Trust under them, shall, without the Consent of the Congress accept of any present, Emolument, Office, or Title, of any kind whatever, from any King, Prince, or foreign State.
that conclusion is far from clear.
Why would it say: “…without the Consent of Congress….”
That question was answered in 1871 and sheds light on the issue of today:
In 1871 the Attorney General of the United States ruled that: “A minister of the United States abroad is not prohibited by the Constitution from rendering a friendly service to a foreign power, even that of negotiating a treaty for it, provided he does not become an officer of that power . . . but the acceptance of a formal commission, as minister plenipotentiary, creates an official relation between the individual thus commissioned and the government which in this way accredits him as its representative,” which is prohibited by this clause of the Constitution.
ftnt: 13 Ops. Atty. Gen. 538 (1871).
All of that is from: Constitution Annotated | Congress.gov | Library of Congress, in particular: https://www.congress.gov/content/conan/pdf/GPO-CONAN-REV-2016-9-2.pdf.
If you read the Emoluments Clause to prohibit Trump from representing another government, unless Congress consents, it makes sense as written.
Those falsely claiming that Trump must divest himself of his business interests and/or put them in a blind trust under the Emoluments Clause (Lawrence Tribe comes to mind) are thinking of a tradition of presidents using blind trusts.
But tradition doesn’t amend the Constitution.
Any story saying that the Emoluments Clause compels president-elect Trump either to divest himself of assets or to use a blind trust is false.
PS: I have admired Prof. Lawrence Tribe’s work for years and am saddened that he is willing to sully his reputation in this way.
People not infrequently complain that Stanford CoreNLP is slow or takes a ton of memory. In some configurations this is true. In other configurations, this is not true. This section tries to help you understand what you can or can’t do about speed and memory usage. The advice applies regardless of whether you are running CoreNLP from the command-line, from the Java API, from the web service, or from other languages. We show command-line examples here, but the principles are true of all ways of invoking CoreNLP. You will just need to pass in the appropriate properties in different ways. For these examples we will work with chapter 13 of Ulysses by James Joyce. You can download it if you want to follow along.
You have to appreciate the use of a non-trivial text for advice on speed and memory usage of CoreNLP.
How does your text stack up against Chapter 13 of Ulysses?
I’m supposed to be reading Ulysses long distance with a friend. I’m afraid we have both fallen behind. Perhaps this will encourage me to have another go at it.
What favorite or “should read” text would you use to practice with CoreNLP?
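The standard speed advice from the CoreNLP documentation is to request only the annotators you need. A minimal sketch of assembling such a command line from Python follows; the memory size and annotator list are example choices, and you would adjust both for your own pipeline:

```python
# Build a Stanford CoreNLP command line that trades accuracy for speed by
# requesting only lightweight annotators (no parsing or coreference).
# The -mx flag and annotator list here are illustrative choices.

def corenlp_command(input_file, annotators=("tokenize", "ssplit", "pos"),
                    memory="2g"):
    """Return an argv list for a command-line CoreNLP run."""
    return [
        "java", f"-mx{memory}", "-cp", "*",
        "edu.stanford.nlp.pipeline.StanfordCoreNLP",
        "-annotators", ",".join(annotators),
        "-file", input_file,
    ]

cmd = corenlp_command("ulysses-ch13.txt")
print(" ".join(cmd))
```

Passing the result to `subprocess.run(cmd)` from the CoreNLP distribution directory would execute it; the same `-annotators` property applies however you invoke CoreNLP.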
That’s one take on the events that might have led to today’s New York Times expose: it seems Facebook has tasked its development teams with “quietly develop[ing] software to suppress posts from appearing in people’s news feeds in specific geographic areas”.
As “current and former Facebook employees” told the Times, Facebook wouldn’t do the suppression themselves, nor need to. Rather:
It would offer the software to enable a third party – in this case, most likely a partner Chinese company – to monitor popular stories and topics that bubble up as users share them across the social network… Facebook’s partner would then have full control to decide whether those posts should show up in users’ feeds.
This is a step beyond the censorship Facebook has already agreed to perform on behalf of governments such as Turkey, Russia and Pakistan. In those cases, Facebook agreed to remove posts that had already “gone live”. If this software were in use, offending posts could be halted before they ever appeared in a local user’s news feed.
You can’t filter your own Facebook timeline or share your filter with other Facebook users, but the Chinese government can filter the timelines of 721,000,000+ internet users?
Stronger detection. The most important thing we can do is improve our ability to classify misinformation. This means better technical systems to detect what people will flag as false before they do it themselves.
Easy reporting. Making it much easier for people to report stories as fake will help us catch more misinformation faster.
Third party verification. There are many respected fact checking organizations and, while we have reached out to some, we plan to learn from many more.
Warnings. We are exploring labeling stories that have been flagged as false by third parties or our community, and showing warnings when people read or share them.
Related articles quality. We are raising the bar for stories that appear in related articles under links in News Feed.
Disrupting fake news economics. A lot of misinformation is driven by financially motivated spam. We’re looking into disrupting the economics with ads policies like the one we announced earlier this week, and better ad farm detection.
Listening. We will continue to work with journalists and others in the news industry to get their input, in particular, to better understand their fact checking systems and learn from them.
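The "easy reporting" plus "third party verification" plus "warnings" steps above amount to a small decision rule. Here is a toy caricature of that pipeline; the field names and the flag threshold are invented for illustration and are not Facebook's actual system:

```python
# Toy sketch of the flag-and-verify pipeline described above: user reports
# and third-party fact-check verdicts combine into a warning decision.
# Field names and the threshold are hypothetical.

def needs_warning(user_flags, fact_check_verdicts, flag_threshold=10):
    """Warn if any fact checker calls the story false, or enough users flag it."""
    if "false" in fact_check_verdicts:
        return True
    return user_flags >= flag_threshold

print(needs_warning(user_flags=3, fact_check_verdicts=["false"]))  # True
print(needs_warning(user_flags=3, fact_check_verdicts=[]))         # False
```

Note what the sketch makes plain: someone still has to choose the threshold and the fact checkers, which is exactly the "who governs?" problem.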
Enthrone Zuckerberg as Censor of the Internet.
His blinding lust to be Censor of the Internet* is responsible for Zuckerberg passing up $millions, if not $billions, in filtering revenue.
* Zuckerberg’s “lust” to be “Censor of the Internet” is an inference based on the Facebook centered nature of his “ideas” for dealing with “fake news.” Unpaid censorship instead of profiting from user-centered filtering is a sign of poor judgment and/or madness.
There have been so many conversations on the impact of fake news on the recent US elections. An already polarized public is pushed further apart by stories that affirm beliefs or attack the other side. Yes. Fake news is a serious problem that should be addressed. But by focusing solely on that issue, we are missing the larger, more harmful phenomenon of misleading, biased propaganda.
It’s not only fringe publications. Think for a moment about the recent “Hamilton”-Pence showdown. What actually happened there? How disrespectful was the cast towards Mike Pence? Was he truly being “Booed Like Crazy” as the Huffington Post suggests? The short video embedded in that piece makes it seem like it. But this video on ABC suggests otherwise. “There were some cheers and some boos,” says Pence himself.
In an era of post-truth politics, driven by the 24-hour news cycle, diminishing trust in institutions, rich visual media, and the ubiquity and velocity of social networked spaces, how do we identify information that is tinted — information that is incomplete, that may help affirm our existing beliefs or support someone’s agenda, or that may be manipulative — effectively driving a form of propaganda?
Biased information — misleading in nature, typically used to promote or publicize a particular political cause or point of view — is a much more prevalent problem than fake news. It’s a problem that doesn’t exist only within Facebook but across social networks and other information-rich services (Google, YouTube, etc.).
A compelling piece of work, but I disagree that biased information “…is a much more prevalent problem than fake news.”
I don’t disagree with Lotan’s “facts.” I would go further and say all information is “biased,” from one viewpoint or another.
Collecting, selecting and editing information are done by biased individuals to attract biased audiences, audiences who drive the production of content they find agreeable.
Non-news example: How long would a classical music record label survive insisting its purchasers enjoy rap music?
At least if they were attempting to use a classical music mailing list for their records?
To blame “news/opinion” writers for bias is akin to shooting the messenger.
A messenger who is delivering the content readers requested.
Take Lotan’s example of providing more “context” for a story drawn from the Middle East:
A more recent example from the Middle East is that of Ahmed Manasra, a 13-year old Palestinian-Israeli boy who stabbed a 13-year old Israeli Jew in Jerusalem last Fall. A video [warning: graphic content] that was posted to a public Facebook page shows Manasra wounded, bleeding, and being cursed at by an Israeli. It was viewed over 2.5M times with the following caption:
Israeli Zionists curse a dying Palestinian child as Israeli Police watch…. His name was Ahmad Manasra and his last moments were documented in this video.
But neither the caption nor the video itself presents the full context. Just before Manasra was shot, he stabbed a few passersby, as well as a 13-year old Israeli Jew. Later, he was taken to a hospital.
Lotan fails to mention Ahmad Manasra’s actions were in the context of a decades old, systematic campaign by the Israeli government (not the Israeli people) to drive Palestinians from illegally occupied territory. A campaign in which thousands of Palestinians have died, homes and olive groves have been destroyed, etc.
Bias? Context? Your call.
However you classify my suggested “additional” context for the story of Ahmad Manasra, it will be considered a needed correction by some and bias by others.
In his conclusion, Lotan touches ever so briefly on the issue uppermost in my mind when discussing “fake” or “biased” content:
There are other models of automated filtering and downgrading for limiting the spread of misleading information (the Facebook News Feed already does plenty of filtering and nudging). But again, who decides what’s in or out, who governs? And who gets to test the potential bias of such an algorithmic system?
In a nutshell: who governs?
Despite the unquestioned existence of “false,” “fake,” “biased,” and “misleading” information, “who governs?” has only one acceptable answer: the reader.
Enabling readers to discover, if they wish, alternative (or, in the view of some, more complete or contextual) accounts? Great! We have the beginnings of the technology to do so.
A story could be labeled “false,” “fake,” by NPR and if you subscribe to NPR labeling, that appears in your browser. Perhaps I subscribe to Lady GaGa labeling and it has no opinion on that story and unfortunate subscribers to National Review labeling see a large green $$$ or whatever it is they use to show approval.
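The subscription-based labeling idea, where readers choose whose labels they see rather than having one censor choose for everyone, could look like this. All of the feeds, labels and story IDs are invented for illustration:

```python
# Hypothetical sketch: a reader subscribes to labeling feeds of their choice,
# and the browser shows only those subscribed labels next to a story.
# Feeds, labels, and story IDs are invented for illustration.

LABEL_FEEDS = {
    "NPR":             {"story-123": "false"},
    "Lady GaGa":       {},                       # no opinion on story-123
    "National Review": {"story-123": "$$$"},
}

def labels_for(story_id, subscriptions):
    """Return the labels a reader sees, drawn only from feeds they chose."""
    return {feed: LABEL_FEEDS[feed][story_id]
            for feed in subscriptions
            if story_id in LABEL_FEEDS.get(feed, {})}

print(labels_for("story-123", ["NPR"]))        # {'NPR': 'false'}
print(labels_for("story-123", ["Lady GaGa"]))  # {} — that feed has no opinion
```

No central party decides what is "fake"; the reader's subscription list is the only filter.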
I fear censors far more than any form or degree of “false,” “fake,” “biased,” “misleading,” information.
Despite R’s popularity, it is still very daunting to learn, as R has no point-and-click interface like SPSS and learning it usually takes a lot of time. No worries! As self-taught R learners ourselves, we constantly receive requests about how to learn R. Besides hiring someone to teach you or paying tuition fees for online courses, our suggestion is that you can also pick up some books that fit your current R programming level. Therefore, in this post, we would like to share some good books that teach you how to program in R at three levels: elementary, intermediate, and advanced. Each level focuses on one task, so you will know whether these books fit your needs. While the following books do not necessarily focus on the task we define, you should focus on that task when reading them so you are not lost in context.
Books and reading form the core of my most basic prejudice: Literacy is the doorway to unlimited universes.
A prejudice so strong that I have to work hard at realizing non-literates live in and sense worlds not open to literates. Not less complex, not poorer, just different.
But book lists in particular appeal to that prejudice and since my blog is read by literates, I’m indulging that prejudice now.
Domoske does a credible summary of the contents of the executive summary, a single paragraph of which should have been enough to keep this story off NPR:
When we began our work we had little sense of the depth of the problem. We even found ourselves rejecting ideas for tasks because we thought they would be too easy. Our first round of piloting shocked us into reality. Many assume that because young people are fluent in social media they are equally savvy about what they find there. Our work shows the opposite. We hope to produce a series of high-quality web videos to showcase the depth of the problem revealed by students’ performance on our tasks and demonstrate the link between digital literacy and citizenship. By drawing attention to this connection, a series of videos could help to mobilize educators, policymakers, and others to address this threat to democracy.
Comparing the NPR coverage with the executive summary shows that the article reflects the steps taken by the study but never questions its conclusion that an inability to assess online information is indeed a “threat to democracy.”
To support that conclusion, which earned this story a spot on NPR, the researchers would need historical data on how well or poorly students assessed sources of information in other periods of American history, an assessment of “democracy” at those times, and a demonstration of a causal relationship between the two.
But as you can see from the NPR article, Domoske fails to ask the most rudimentary questions about this study, such as:
“Is there a relationship between democracy and the ability to evaluate sources of information?”
Or, “What historical evidence demonstrates a relationship between democracy and the ability to evaluate sources of information?”
Utter silence on the part of Domoske.
The real headline for a follow-up on this story should be:
NPR Reporter Unable To Distinguish Credible Research From Headline Driven Reports.
Glenn descends into sulking with the Times when he writes:
… Look at the ads
A profusion of pop-up ads or other advertising indicates you should handle the story with care. Another sign is a bunch of sexy ads or links, designed to be clicked — “Celebs who did Porn Movies” or “Naughty Walmart Shoppers Who have no Shame at All” — which you generally do not find on legitimate news sites.
The examples read like Facebook ad headlines, and Glenn knows that.
Rather than saying “Facebook,” Glenn wants you to conclude that “on your own.” (An old manipulation/propaganda technique.)
Glenn’s “read the article closely” was #4, coming in after #1, “determine whether the article is from a legitimate website,” #2, “Check the ‘contact us’ page,” or #3, “examine the byline of the reporter and see whether it makes sense.”
I cannot, and do not wish to, imagine the U.S. without its National Park system. The sale and/or despoliation of this more than 80 million acres of mountain, forest, stream, ocean, geyser, cavern, canyon, and every other natural formation North America contains would diminish the country immeasurably. “National parks,” wrote novelist Wallace Stegner, “are the best idea we ever had. Absolutely American, absolutely democratic, they reflect us at our best rather than our worst.”
Stegner’s quote, which gave Ken Burns’ National Parks documentary its subtitle, can sound overoptimistic when we study the parks’ history. Though not officially designated until the 20th century, the idea stretches back to 1851, when a battalion, intent on finding and destroying an Indian village, also found Yosemite. Named for what the soldiers thought was the tribe they killed and burned, the word actually translates as “they are killers.”
Westward expansion and the annexation of Hawaii have left us many sobering stories like that of Yosemite’s “discovery.” And during their development in the early- to mid-20th century, the parks often required the mass displacement of people, many of whom had lived on the land for decades—or centuries. But despite the bloody history, the creation of these sanctuaries has preserved the country’s embarrassment of natural beauty and irreplaceable biodiversity for a century now. (The National Park Service celebrated its 100th anniversary just this past August.)
The National Park Service and its allies have acted as bulwarks against privateers who would turn places like Yosemite into prohibitively expensive resorts, and perhaps fell the ancient Redwood National forests or blast away the Smoky Mountains. Instead, the parks remain “absolutely democratic,” open to all Americans and international visitors, the pride of conservationists, scientists, hikers, bird watchers, and nature-lovers of all kinds. Given the sprawling, idealistic, and violent history of the National Parks, it may be fair to say that these natural preserves reflect the country at both its worst and its best. And in that sense, they are indeed “absolutely American.”
Links to numerous resources, including National Parks Maps. (Home of 1,198 free high resolution maps of U.S. national parks.)
The national parks of the United States were born in violence and disenfranchisement of the powerless. It is beyond our power to atone for those excesses and injuries done in the past.
It is our task to preserve those parks as monuments to our violence against the powerless and as natural treasures for all humanity.
Posted in Government, Mapping, Maps | Comments Off on 1,198 Free High Resolution Maps of U.S. National Parks
CAUTIOUS COMPUTER USERS put a piece of tape over their webcam. Truly paranoid ones worry about their devices’ microphones—some even crack open their computers and phones to disable or remove those audio components so they can’t be hijacked by hackers. Now one group of Israeli researchers has taken that game of spy-versus-spy paranoia a step further, with malware that converts your headphones into makeshift microphones that can slyly record your conversations.
Researchers at Israel’s Ben Gurion University have created a piece of proof-of-concept code they call “Speake(a)r,” designed to demonstrate how determined hackers could find a way to surreptitiously hijack a computer to record audio even when the device’s microphones have been entirely removed or disabled. The experimental malware instead repurposes the speakers in earbuds or headphones to use them as microphones, converting the vibrations in air into electromagnetic signals to clearly capture audio from across a room.
“People don’t think about this privacy vulnerability,” says Mordechai Guri, the research lead of Ben Gurion’s Cyber Security Research Labs. “Even if you remove your computer’s microphone, if you use headphones you can be recorded.”
But the Ben Gurion researchers took that hack a step further. Their malware uses a little-known feature of RealTek audio codec chips to silently “retask” the computer’s output channel as an input channel, allowing the malware to record audio even when the headphones remain connected into an output-only jack and don’t even have a microphone channel on their plug. The researchers say the RealTek chips are so common that the attack works on practically any desktop computer, whether it runs Windows or MacOS, and most laptops, too. RealTek didn’t immediately respond to WIRED’s request for comment on the Ben Gurion researchers’ work. “This is the real vulnerability,” says Guri. “It’s what makes almost every computer today vulnerable to this type of attack.”
(emphasis in original)
Wired doesn’t give up any more details but that should be enough to get you started.
You will have to search for RealTek audio codec datasheets yourself: RealTek requires a signed NDA from development partners before granting access to them.
Among numerous others, I know for a fact that datasheets on ALC655, ALC662, ALC888, ALC1150, and ALC5631Q are freely available online.
You will have to replicate the hack but then:
Choose your targets for taping
Obtain their TV/music preferences from Amazon, etc.
License new content (would not want to upset the RIAA) for web streaming
Offer your target the “latest” TV/music by (name) for free 30 day trial
For the nosy non-hacker, expect to see “hacked” earphones for sale on the Dark Web.
Perhaps even in time for holiday shopping!
Warning: Hacking or buying hacked headphones violates any number of federal, state and local laws, depending on your jurisdiction.
PS: I am curious whether the audio hardware in cellphones is subject to a similar hack.
Perhaps this is the dawning of the age of transparency. 😉
Visual narrative is often a combination of explicit information and judicious omissions, relying on the viewer to supply missing details. In comics, most movements in time and space are hidden in the “gutters” between panels. To follow the story, readers logically connect panels together by inferring unseen actions through a process called “closure”. While computers can now describe the content of natural images, in this paper we examine whether they can understand the closure-driven narratives conveyed by stylized artwork and dialogue in comic book panels. We collect a dataset, COMICS, that consists of over 1.2 million panels (120 GB) paired with automatic textbox transcriptions. An in-depth analysis of COMICS demonstrates that neither text nor image alone can tell a comic book story, so a computer must understand both modalities to keep up with the plot. We introduce three cloze-style tasks that ask models to predict narrative and character-centric aspects of a panel given n preceding panels as context. Various deep neural architectures underperform human baselines on these tasks, suggesting that COMICS contains fundamental challenges for both vision and language.
From the introduction:
Comics are fragmented scenes forged into full-fledged stories by the imagination of their readers. A comics creator can condense anything from a centuries-long intergalactic war to an ordinary family dinner into a single panel. But it is what the creator hides from their pages that makes comics truly interesting, the unspoken conversations and unseen actions that lurk in the spaces (or gutters) between adjacent panels. For example, the dialogue in Figure 1 suggests that between the second and third panels, Gilda commands her snakes to chase after a frightened Michael in some sort of strange cult initiation. Through a process called closure, which involves (1) understanding individual panels and (2) making connective inferences across panels, readers form coherent storylines from seemingly disparate panels such as these. In this paper, we study whether computers can do the same by collecting a dataset of comic books (COMICS) and designing several tasks that require closure to solve.
(emphasis in original)
Comic book security: A method for defeating worldwide data slurping and automated analysis.
The authors find that human performance easily exceeds automated analysis, raising the question of whether a mixture of text and images could be used to evade widespread data sweeps.
Security based on a lack of human eyes to review content is chancy but depending upon your security needs, it may be sufficient.
For example, a cartoon in a local newspaper that designates a mission target and time, only needs to be secure from the time of its publication until the mission has finished. That it is discovered days, weeks or even months later, doesn’t impact the operational security of the mission.
Following the experiment, the researchers came up with a technique of exfiltration based on their newly established 10 commandments. According to the SafeBreach presentation, these commandments are:
No security through obscurity should be used.
Only Web browsing and derived traffic is allowed.
Anything that may theoretically be perceived as passing information is forbidden.
Scrutinize every packet during comprehensive network monitoring.
Assume TLS/SSL termination at the enterprise level.
Assume the receiving party has no restrictions.
Assume no nation-state or third-party site monitoring.
Enable time synchronization between the communicating parties.
There’s bonus points for methods that can be implemented manually from the sender side.
Active disruption by the enterprise is always possible.
The technique discussed is criticized as “low bandwidth,” but then, how much bandwidth does it take to transmit an admin login and password?
Definitely worth a slow read.
Other contenders for similar 10 commandments of exfiltration?
As a trivial example, consider a sender who leaves work every day at the same time through a double door. If they exit to their right, it is a 0 and if they exit to their left, it is a 1. Perhaps only on set days of the week or month.
Very low bandwidth but as I said, for admin login/password, it would be sufficient.
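The door-exit channel above can be sketched in a few lines. This is a minimal illustration of my own (the right-is-0, left-is-1 convention and the example credential are assumptions, not from any real tool): each daily exit leaks one bit, and the observer reassembles them into bytes.

```python
# Minimal sketch of the door-exit covert channel described above.
# Convention (assumed): exit right = 0, exit left = 1, one bit per day.

def encode(secret: str) -> list[int]:
    """Turn an ASCII secret into the bit sequence to signal, MSB first."""
    bits = []
    for ch in secret:
        for i in range(7, -1, -1):
            bits.append((ord(ch) >> i) & 1)
    return bits

def decode(bits: list[int]) -> str:
    """Reassemble observed exits back into the secret."""
    chars = []
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        chars.append(chr(byte))
    return "".join(chars)

bits = encode("admin:hunter2")
print(len(bits))       # 13 characters x 8 bits = 104 exits required
print(decode(bits))    # round-trips back to "admin:hunter2"
```

At one bit per workday, a 13-character credential takes roughly five months of observed exits. Very low bandwidth indeed, but for a one-time admin login/password, sufficient.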
The Egyptological museum search is a PHP tool aimed to facilitate locating the descriptions and images of ancient Egyptian objects in online catalogues of major museums. Online catalogues (ranging from selections of highlights to complete digital inventories) are now offered by almost all major museums holding ancient Egyptian items and have become indispensable in research work. Yet the variety of web interfaces and of search rules may overstrain any person performing many searches in different online catalogues.
Egyptological museum search was made to provide a single search point for finding objects by their inventory numbers in major collections of Egyptian antiquities that have online catalogues. It tries to convert user input into search queries recognised by museums’ websites. (Thus, for example, stela Geneva D 50 is searched as “D 0050,” statue Vienna ÄS 5046 is searched as “AE_INV_5046,” and coffin Turin Suppl. 5217 is searched as “S. 05217.”) The following online catalogues are supported:
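The conversions quoted above are essentially per-museum formatting rules. A minimal sketch of how such a rewriter might look, using only the three examples given in the text; the rule details and function names are my own illustration, not the tool's actual PHP code:

```python
import re

# Hypothetical per-museum normalisation rules, illustrating the kind of
# query rewriting the search tool performs on inventory numbers.

def geneva(number: str) -> str:
    """Stela Geneva 'D 50' -> 'D 0050': zero-pad the numeric part to 4 digits."""
    m = re.match(r"([A-Z]+)\s*(\d+)", number)
    return f"{m.group(1)} {int(m.group(2)):04d}"

def vienna(number: str) -> str:
    """Statue Vienna 'ÄS 5046' -> 'AE_INV_5046': swap the prefix, keep the digits."""
    m = re.match(r"ÄS\s*(\d+)", number)
    return f"AE_INV_{m.group(1)}"

def turin(number: str) -> str:
    """Coffin Turin 'Suppl. 5217' -> 'S. 05217': shorten the prefix, pad to 5 digits."""
    m = re.match(r"Suppl\.\s*(\d+)", number)
    return f"S. {int(m.group(1)):05d}"

print(geneva("D 50"))        # D 0050
print(vienna("ÄS 5046"))     # AE_INV_5046
print(turin("Suppl. 5217"))  # S. 05217
```

A single search point then just dispatches the user's input through the rule for each supported museum, which is exactly the drudgery the tool spares the researcher.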
Adam’s notes from the Vanderbilt University XQuery Working Group:
We evaluated and manipulated text data (i.e., strings) within Extensible Markup Language (XML) using string functions in XQuery, an XML query language, and BaseX, an XML database engine and XQuery processor. This tutorial covers the basics of how to use XQuery string functions and manipulate text data with BaseX.
We used a limited dataset of English words as text data to evaluate and manipulate, and I’ve created a GitHub gist of XML input and XQuery code for use with this tutorial.
A quick run-through of basic XQuery string functions that takes you up to writing your own XQuery function.
While we wait for more reports from the Vanderbilt University XQuery Working Group, have you considered using XQuery to impose different views on a single text document?
For example, some Bible translations follow the “traditional” chapter and verse divisions (a very late addition), while others use paragraph-level organization and largely ignore the traditional verses.
Creating a view of a single source text as either one or both should not involve permanent changes to a source file in XML. Or at least not the original source file.
If for processing purposes there was a need for a static file rendering one way or the other, that’s doable but should be separate from the original XML file.
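The idea of two views over one untouched source can be sketched quickly. This is a toy illustration (the element names and the standoff paragraph list are invented for the example, not any real Bible encoding scheme): the source carries verse markup only, and each view is computed on the fly.

```python
import xml.etree.ElementTree as ET

# Toy source: verse-level markup only. The source is never modified.
SOURCE = """<chapter n="1">
  <verse n="1">In the beginning was the text.</verse>
  <verse n="2">And the text was divided.</verse>
  <verse n="3">And the divisions were late additions.</verse>
</chapter>"""

# Standoff description of an alternative, paragraph-level view:
# each entry lists the verse numbers that make up one paragraph.
PARAGRAPHS = [[1, 2], [3]]

def verse_view(root: ET.Element) -> str:
    """Render the traditional chapter-and-verse view."""
    ch = root.get("n")
    return "\n".join(f"{ch}:{v.get('n')} {v.text}" for v in root.iter("verse"))

def paragraph_view(root: ET.Element, paragraphs: list[list[int]]) -> str:
    """Render a paragraph view of the same source: verses joined, numbers dropped."""
    verses = {int(v.get("n")): v.text for v in root.iter("verse")}
    return "\n\n".join(" ".join(verses[n] for n in para) for para in paragraphs)

root = ET.fromstring(SOURCE)
print(verse_view(root))
print()
print(paragraph_view(root, PARAGRAPHS))
```

Both renderings are derived; nothing is written back to the source file, which is the point above. A static file for either view is just the output of one of these functions saved separately.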
Posted in BaseX, XML, XQuery | Comments Off on Manipulate XML Text Data Using XQuery String Functions
I asked on Twitter today what Linux things they would like to know more about. I thought the replies were really cool so here’s a list (many of them could be discussed on any Unixy OS, some of them are Linux-specific)
I count forty-seven (47) entries on Julia’s list, which should keep you busy through any holiday!
Posted in Linux OS | Comments Off on Things to learn about Linux
A powerful heap corruption vulnerability exists in the gstreamer decoder for the FLIC file format. Presented here is an 0day exploit for this vulnerability.
This decoder is generally present in the default install of modern Linux desktops, including Ubuntu 16.04 and Fedora 24. Gstreamer classifies its decoders as “good”, “bad” or “ugly”. Despite being quite buggy, and not being a format at all necessary on a modern desktop, the FLIC decoder is classified as “good”, almost guaranteeing its presence in default Linux installs.
Thanks to solid ASLR / DEP protections on the (some) modern 64-bit Linux installs, and some other challenges, this vulnerability is a real beast to exploit.
But in order to attack the FLIC decoder, there simply isn’t any scripting opportunity. The attacker gets, once, to submit a bunch of scriptless bytes into the decoder, and try and gain code execution without further interaction…
… and good luck with that! Welcome to the world of scriptless exploitation in an ASLR environment. Let’s give it our best shot.
Above my head, at the moment, but I post it as a challenge for hackers who want to test their understanding/development of exploits.
BTW, some wag, I didn’t bother to see which one, complained Chris’ post is “irresponsible disclosure.”
Sure, the CIA, FBI, NSA and their counterparts in other governments, plus their cybersecurity contractors, should have sole access to such exploits. Ditto for the projects concerned. (NOT!)
“Responsible disclosure” is just another name for unilateral disarmament, on behalf of all of us.
Open and public discussion is much better.
Besides, a hack of Ubuntu 16.04 won’t be relevant at most government installations for years.
Dr Johanna Green is a lecturer in Book History and Digital Humanities at the University of Glasgow. Her PhD (English Language, University of Glasgow 2012) focused on a palaeographical study of the textual division and subordination of the Exeter Book manuscript. Here, she tells us about the first of two sessions she led for the Society of Northumbrian Scribes, a group of calligraphers based in North East England, bringing palaeographic research and modern-day calligraphy together for the public.
(emphasis in original)
Not phrased in subject identity language, but concerns familiar to the topic map community are not far away:
My own research centres on the scribal hand of the manuscript, specifically the ways in which the poems are divided and subdivided from one another and the decorative designs used for these litterae notabiliores throughout. For much of my research, I have spent considerable time (perhaps more than I am willing to admit) wondering where one ought to draw the line with palaeography. When do the details become so tiny to no longer be of any significance? When are they just important enough to mean something significant for our understanding of how the manuscript was created and arranged? How far am I willing to argue that these tiny features have significant impact? Is, for example, this littera notabilior Đ on f. 115v (Judgement Day I, left) different enough in a significant way to this H on f.97v, (The Partridge, bottom right), and in turn are both of these litterae notabiliores performing a different function than the H on f.98r (Soul and Body II, far right)?
(emphasis in original, footnote omitted)
When Dr. Green says:
…When do the details become so tiny to no longer be of any significance?…
I would say: When do the subjects (details) become so tiny that we want to pass over them in silence? That is, they could be, but are not, represented in a topic map.
Green ends her speculation, to a degree, by enlisting scribes to re-create the manuscript of interest under her observation.
I’ll leave her conclusions for her post but consider a secondary finding:
The experience also made me realise something else: I had learned much by watching them write and talking to them during the process, but I had also learned much by trying to produce the hand myself. Rather than return to Glasgow and teach my undergraduates the finer details of the script purely through verbal or written description, perhaps providing space for my students to engage in the materials of manuscript production, to try out copying a script/exemplar for themselves would help increase their understanding of the process of writing and, in turn, deepen their knowledge of the constituent parts of a letter and their significance in palaeographic endeavour. This last is something I plan to include in future palaeography teaching.
Dr. Green’s concern over palaeographic detail illustrates two important points about topic maps:
Potential subjects for a topic map are always unbounded.