Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

May 21, 2016

TSA Cybersecurity Failures – The Good News

Filed under: Cybersecurity,Government,Security — Patrick Durusau @ 8:48 pm

The TSA is failing spectacularly at cybersecurity by Violet Blue.

From the post:

Five years of Department of Homeland Security audits have revealed, to the surprise of few and the dismay of all, that the TSA is as great at cybersecurity as it is at customer service.

The final report from the DHS Office of Inspector General details serious persistent problems with TSA staff’s handling of IT security protocols. These issues include servers running software with known vulnerabilities, no incident report process in place, and zero physical security protecting critical IT systems from unauthorized access.

What we’re talking about here are the very basics of IT security, and the TSA has been failing at these quite spectacularly for some time.

Violet reports on a cornucopia of cybersecurity issues with the TSA and its information systems. Including:


As part of this year’s final report, auditors watched TSA staff as they scanned STIP servers located at two DHS data centers and the Orlando International Airport. The scans “detected a total of 12,282 high vulnerabilities on 71 of the 74 servers tested.”

The redacted final report omits the names of the servers and, due to space concerns (it's only 47 pages long), omits the particulars of the 12,282 high vulnerabilities found. (That's my assumption; the report doesn't say that.)

What the report fails to mention is the good news about TSA cybersecurity failures:

Despite its woeful performance on cybersecurity and its utter failure to ever stop a terrorist, there have been no terrorist incidents on US airlines at points guarded by the TSA.

The TSA and its faulty cybersecurity equipment could be retired, en masse, and its impact on the incidence of terrorism on U.S. based air travel would be exactly zero.

Unless you need hacking practice on poorly maintained systems, avoid the TSA and its broken IT systems. Who wants to brag about stealing a candy bar from a vending machine? Do you?

Any cyberoffense against the TSA and its systems will expose you to long prison sentences for breaching systems that make no difference. That’s the definition of a bad deal. Just don’t go there.

May 20, 2016

Ethereum Contracts – Future Hacker Candy

Filed under: Cybersecurity,Security — Patrick Durusau @ 1:56 pm

Ethereum Contracts Are Going To Be Candy For Hackers by Peter Vessenes.

From the post:

Smart Contracts and Programming Defects

Ethereum promises that contracts will ‘live forever’ in the default case. And, in fact, unless the contract contains a suicide clause, they are not destroyable.

This is a double-edged sword. On the one hand, the default suicide mode for a contract is to return all funds embedded in the contract to the owner; it’s clearly unworkable to have a “zero trust” system in which the owner of a contract can at will claim all money.

So, it’s good to let people reason about the contract longevity. On the other hand, I have been reviewing some Ethereum contracts recently, and the code quality is somewhere between “optimistic as to required quality” and “terrible” for code that is supposed to run forever.

Dan Mayer cites research showing industry average bugs per 1000 lines of code at 15-50 and Microsoft released code at 0.5 per 1000, and 0(!) defects in 500,000 lines of code for NASA, with a very expensive and time-consuming process.

Ethereum Smart Contract Bugs per Line of Code exceeds 100 per 1000

My review of Ethereum Smart Contracts available for inspection at dapps.ethercasts.com shows a likely error rate of something like 100 per 1000, maybe higher.

If you haven’t seen Ethereum, now is the time to visit.

From the homepage:

Ethereum is a decentralized platform that runs smart contracts: applications that run exactly as programmed without any possibility of downtime, censorship, fraud or third party interference.

These apps run on a custom built blockchain, an enormously powerful shared global infrastructure that can move value around and represent the ownership of property. This enables developers to create markets, store registries of debts or promises, move funds in accordance with instructions given long in the past (like a will or a futures contract) and many other things that have not been invented yet, all without a middle man or counterparty risk.

The project was crowdfunded during August 2014 by fans all around the world. It is developed by the Ethereum Foundation, a Swiss nonprofit, with contributions from great minds across the globe.

It's early in the life cycle, and some contracts will be better written than others.

Vulnerabilities will scale as Authors × Contracts, so the future looks bright for hackers.
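To put rough numbers on that, here is a back-of-the-envelope sketch using the defect rates cited above. The corpus numbers (authors, contracts per author, lines per contract) are hypothetical, chosen only for illustration:

```python
# Expected-defect arithmetic using the per-KLOC rates cited above.
# Corpus numbers (authors, contracts, lines) are hypothetical.

rates_per_kloc = {
    "industry average (low)": 15,
    "industry average (high)": 50,
    "Microsoft released code": 0.5,
    "Ethereum contracts (Vessenes' estimate)": 100,
}

authors = 200           # hypothetical
contracts_each = 5      # hypothetical
loc_per_contract = 300  # hypothetical

total_loc = authors * contracts_each * loc_per_contract  # 300,000 LOC

for label, rate in rates_per_kloc.items():
    bugs = total_loc / 1000 * rate
    print(f"{label}: ~{bugs:,.0f} expected defects")
```

At the Ethereum rate that hypothetical corpus carries about 30,000 defects; at Microsoft's released-code rate, about 150. The rate, not the corpus size, is the story.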

May 19, 2016

FindFace – Party Like It’s 2001

Filed under: Privacy,Security — Patrick Durusau @ 4:01 pm

What a difference fifteen years make!

Is Google or Facebook evil? Forget it!

Russian nerds have developed a new face recognition technology-based app called FindFace, which is a nightmare for privacy lovers and human rights advocates.

FindFace is a terrifyingly powerful facial recognition app that lets you photograph strangers in a crowd and find their real identity by connecting them to their social media accounts with a 70% success rate, putting public anonymity at risk.

(From This App Lets You Find Anyone’s Social Profile Just By Taking Their Photo by Mohit Kumar)

Compare that breathless, “…nightmare for privacy lovers…public anonymity at risk…” prose to:

Super Bowl, or Snooper Bowl?

As 100,000 fans stepped through the turnstiles at Super Bowl XXXV, a camera snapped their image and matched it against a computerized police lineup of known criminals, from pickpockets to international terrorists.

It’s not a new kind of surveillance. But its use at the Super Bowl — dubbed “Snooper Bowl” by critics — has highlighted a debate about the balance between individual privacy and public safety.

Law enforcement officials say what was done at the Super Bowl is no more intrusive than routine video surveillance that most people encounter each day as they’re filmed in stores, banks, office buildings or apartment buildings.

But to critics, the addition of the face-recognition system can essentially put everyone in a police lineup.

“I think it presents a whole different picture of America,” said Howard Simon, executive director of the American Civil Liberties Union in Florida.

(From Biometrics Used to Detect Criminals at Super Bowl by Vickie Chachere)

If you don’t keep up with American football, Super Bowl XXXV was held in January of 2001.

Facial recognition being common in 2001, why the sudden hand-wringing over privacy and FindFace?

Oh, I get it. It is the democratization of the loss of privacy.

Those whose privacy would be protected by privilege or position are suddenly fair game to anyone with a smartphone.

A judge coming out of a kinky bar can be erased or not noticed on police surveillance video, but in a smartphone image, not so much.

The “privacy” of the average U.S. citizen depends on the inattention of state actors.

I’m all for sharing our life-in-the-goldfish-bowl condition with the powerful and privileged.

Get FindFace and use it.

Create similar apps and use topic maps to bind the images to social media profiles.

When the State stops surveillance, perhaps, just perhaps, citizens can stop surveillance of the State. Maybe.

If “privacy” advocates object, ask them what surveillance by the State they support. If the answer isn’t “none,” they have chosen the side of power and privilege. What more is there to say? (BTW, take their photo with FindFace or a similar app.)

Allo, Allo, Google and the Government Can Both Hear You

Filed under: Government,Privacy,Security — Patrick Durusau @ 10:35 am

Google’s Allo fails to use end-to-end encryption by default by Graham Cluley.

The lack of end-to-end encryption by default in Google’s Allo might look like a concession to law enforcement.

Graham points out given the choice of no government or Google spying versus government and Google spying, Google chose the latter.

Anyone working on wrappers for apps to encrypt their output and/or to go dark in terms of reporting to the mother ship?

PS: Yes, Allo offers encryption you can “turn on” but will you trust encryption from someone who obviously wants to spy on you? Your call.

May 18, 2016

Best Served From The Ukraine [Aside on Jury Instruction Re FBI Evidence]

Filed under: Cybersecurity,Security — Patrick Durusau @ 4:21 pm

Experts Warn of Super-Stealthy Furtim Malware by Phil Muncaster.

From the post:

Security experts are warning of newly discovered credential-stealing malware which prioritizes stealth, scoring a 0% detection rate in VirusTotal.

Furtim, a Latin word meaning “by stealth,” was first spotted by researcher @hFireF0X and consists of a driver, a downloader and three payloads, according to enSilo researcher Yotam Gottesman.

The payloads are: a power-saving configuration tool which ensures a victim’s machine is always on and communicating with Furtim’s C&C server; Pony Stealer – a powerful commercial credential stealer; and a third file that communicates back to the server but has yet to be fully analyzed.

Interestingly, Furtim goes to great lengths to stay hidden, going well beyond most malware in checking for the presence of over 400 security tools on the targeted PC, Gottesman claimed.

Phil’s post summarizes some of the better ideas used in this particular bit of malware.

The post by enSilo researcher Yotam Gottesman includes this description:


Upon initial communication, Furtim collects unique information from the device it is running on, such as the computer name and installation date and sends that information to a specific server. The server stores the received details about the infected machine to ensure that the payload is sent only once.
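The "payload is sent only once" bookkeeping Gottesman describes is simple to sketch. The fingerprint fields below are just the two details named in the quote; everything else is hypothetical, not Furtim's actual protocol:

```python
# Sketch of "deliver the payload only once" server-side bookkeeping.
# Fingerprint fields (computer name, install date) come from the
# description above; all other names and data are hypothetical.
import hashlib
from typing import Optional

seen = set()  # fingerprints of machines already served

def fingerprint(computer_name: str, install_date: str) -> str:
    # Derive a stable ID from the details the client reports.
    return hashlib.sha256(f"{computer_name}|{install_date}".encode()).hexdigest()

def serve_payload(computer_name: str, install_date: str) -> Optional[bytes]:
    # Return the payload on first contact, nothing on repeat contacts.
    fp = fingerprint(computer_name, install_date)
    if fp in seen:
        return None
    seen.add(fp)
    return b"<payload>"
```

Note the same mechanism cuts both ways: a server that records unique machine details to dedupe delivery is also building an evidence trail of exactly which machines it infected.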

That reminds me of the search warrant Ben Cox posted in Here Is the Warrant the FBI Used to Hack Over a Thousand Computers, which reads in part:

From any “activating” computer described in Attachment A:

1. The “activating” computer’s actual IP address, and the date and time that the NIT determines what that IP address is;

2. a unique identifier generated by the NIT (e.g., a series of numbers, letters, and/or special characters) to distinguish data from that of other “activating” computers, that will be sent with and collected by the NIT;

3. the type of operating system running on the computer, including type (e.g., Windows), version (e.g., Windows 7), and architecture (e.g., x86);

4. information about whether the NIT has already been delivered to the “activating” computer;

5. the “activating” computer’s Host name;

6. the “activating” computer’s active operating system username; and

7. the “activating” computer’s media access control (“MAC”) address;

….

I mention that because if the FBI can’t prove its NIT’s capabilities against the user’s computer, who knows where they got the information they now claim originated from a child porn website?

Considering the FBI knowingly gave flawed testimony for twenty years, including in death penalty cases, while prosecutors were aware of those flaws, absent both source code and a demonstration of the NIT's use against the defendant’s computer as it existed then, the NIT evidence should be excluded at trial.

Or, at the very least, a jury instruction that recites the FBI’s history of flawed technical testimony in detail and cautions the jury to view all FBI “evidence” as originating from habitual liars.

Could be telling the truth, but that hasn’t been their habit. (Judicial notice of the FBI practice of providing flawed evidence.)

Password Security – Not Blaming Victims

Filed under: Cybersecurity,Security — Patrick Durusau @ 10:10 am

[Image: linkedIn-passwords-460]

No, don’t waste your breath blaming victims.

Do use this list and similar lists as checks on allowable passwords.

One really good starting place would be: Today I Am Releasing Ten Million Passwords by Mark Burnett.
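Using such a list as a password filter takes only a few lines. A sketch (the file name is a placeholder for whatever list you download, e.g. Burnett's):

```python
# Reject any candidate password that appears in a breached-password list.
# "breached-passwords.txt" is a placeholder: a file with one password
# per line, e.g. Mark Burnett's ten-million-password release.

def load_blacklist(path):
    with open(path, encoding="utf-8", errors="ignore") as f:
        return {line.strip() for line in f if line.strip()}

def is_allowed(candidate, blacklist):
    return candidate not in blacklist

# Tiny stand-in for load_blacklist("breached-passwords.txt"):
blacklist = {"123456", "password", "linkedin", "qwerty"}

print(is_allowed("123456", blacklist))                        # prints False
print(is_allowed("correct horse battery staple", blacklist))  # prints True
```

A set lookup is O(1), so even a ten-million-entry list is a cheap check at signup or password-change time.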

May 17, 2016

Mozilla/Tor Vulnerabilities – You Can Help!

Filed under: Cybersecurity,FBI,Security — Patrick Durusau @ 7:45 pm

You have probably heard the news that the FBI doesn’t have to reveal its Tor hack. Judge Changes Mind, Says FBI Doesn’t Have to Reveal Tor Browser Hack by Joseph Cox.

Which of course means that Mozilla isn’t going to get the hack fourteen days before the defense attorneys do.

While knowing the FBI hack would help fix that particular vulnerability, it would not help fix any other Mozilla/Tor vulnerabilities.

Rather than losing any sleep or keystrokes over the FBI’s one hack, clasped in its grubby little hands, contribute to the discovery and, more importantly, the fixing of vulnerabilities in Mozilla and Tor.

Let the FBI have its one-trick pony. From what I understand you had to have Flash installed for it to work.

Flash? Really?

Flash users need to mirror their SSNs, addresses, hard drives, etc., to a public FTP site. At least then you will have a record of when your data is stolen, I mean downloaded.

Whether vulnerabilities persist in Mozilla/Tor isn’t up to the FBI. It’s up to you.

Your call.

May 16, 2016

Censored SIDtoday File Release

Filed under: Cybersecurity,Government,NSA,Security — Patrick Durusau @ 6:52 pm

Snowden Archive — The SIDtoday Files

From the post:

The Intercept’s first SIDtoday release comprises 166 articles, including all articles published between March 31, 2003, when SIDtoday began, and June 30, 2003, plus installments of all article series begun during this period through the end of the year. Major topics include the National Security Agency’s role in interrogations, the Iraq War, the war on terror, new leadership in the Signals Intelligence Directorate, and new, popular uses of the internet and of mobile computing devices.

Along with this batch, we are publishing the stories featured below, which explain how and why we’re releasing these documents, provide an overview of SIDtoday as a publication, report on one especially newsworthy set of revelations, and round up other interesting tidbits from the files.

There are a series of related stories with this initial release:

The Intercept is Broadening Access to the Snowden Archive. Here’s Why by Glenn Greenwald.

NSA Closely Involved in Guantánamo Interrogations, Documents Show by Cora Currier.

The Most Intriguing Spy Stories From 166 Internal NSA Reports by Micah Lee, Margot Williams.

What It’s Like to Read the NSA’s Newspaper for Spies by Peter Maass.

How We Prepared the NSA’s Sensitive Internal Reports for Release by The Intercept.

A master zip file has all the SIDtoday files released thus far.

Comments on the censoring of these files will follow.

Office of Personnel Management Upgrade Crashes and Burns

Filed under: Cybersecurity,Government,Security — Patrick Durusau @ 4:32 pm

You may remember Flash Audit on OPM Infrastructure Update Plan which gave you a summary of the Inspector General for the Office of Personnel Management (OPM) report on OPM’s plans to upgrade its IT structure.

Unfortunately for U.S. taxpayers and people whose records are held by the OPM, the Inspector General doesn’t have veto power over the mis-laid plans of the OPM.

As a consequence, we read today:

Contractor Working on OPM’s Cyber Upgrades Suddenly Quits, Citing ‘Financial Distress” by Jack Moore.

From the post:

The contractor responsible for the hacked Office of Personnel Management’s major IT overhaul is now in financial disarray and no longer working on the project.

OPM awarded the Arlington, Virginia-based Imperatis Corporation a sole-source contract in June 2014 as part of an initial $20 million effort to harden OPM’s cyber defenses, after agency officials discovered an intrusion into the agency’s network.

In the past week, however, Imperatis ceased operations on the contract, citing “financial distress,” an OPM spokesman confirmed to Nextgov.

After Imperatis employees failed to show up for work May 9, OPM terminated Imperatis’ contract for nonperformance and defaulting on its contract.

“DHS and OPM are currently assessing the operational effect of the situation and expect there to be very little impact on current OPM operations,” OPM spokesman Sam Schumach said in a statement to Nextgov. Schumach said OPM had been planning for performance on the contract to end in June 2016.

Show of hands: Who is surprised by this news?

The Board of Directors/Advisors page for Imperatis is now blank.

To help you avoid becoming entangled with these individuals in future contracts, the Wayback Machine has a copy of their Board of Directors/Advisors as of March 31, 2016.

So you can identify the right people:

Board of Directors

CHARLES R. HENRY, CHAIRMAN OF THE BOARD

Retired Major General Charles (Chuck) Henry became Chairman of the Board of Directors in early 2013. Henry retired after 32 years in the U.S. Army, during which he held various important Quartermaster, mission-related, command, and staff positions. He was the Army’s first Competition Advocate General and reported directly to the Secretary of the Army. His overseas assignments included tours of duty in Vietnam, Europe, and Saudi Arabia. Henry is a member of the Army Quartermaster and Defense Logistics Agency Halls of Fame. In his last position with the federal government, he was the founder and first commander of the Defense Contract Management Command (DCMC).

Henry spent 20 years as a senior executive working in industry, serving as the CEO of five companies. He currently sits on two public boards, Molycorp (NYSE) and Gaming Partners International Corp (NASDAQ), and also sits on the Army Science Board, an advisory committee that makes recommendations on scientific and technological concerns to the U.S. Army.

SALLY DONNELLY

Sally Donnelly is founder and CEO of SBD Advisors, an international consulting and communications firm. Donnelly is also a senior advisor and North American representative to C5, a UK-based investment fund in safety and security markets.

Prior to founding SBD Advisors, Donnelly served as head of Washington’s office for U.S. Central Command. Donnelly was a key advisor to General Jim Mattis on policy issues, Congressional relations, communications, and engagements with foreign governments. Before joining U.S. Central Command, Donnelly was a Special Assistant to the Chairman of the Joint Chiefs of Staff, Admiral Mike Mullen.

Before joining the Chairman’s staff, Donnelly worked at Time Magazine for 21 years. Donnelly currently sits on the Board of the American Friends of Black Stork, a British-based military veterans’ charity and is a consultant to the Pentagon’s Defense Business Board.

ERIC T. OLSON

Retired Admiral Eric T. Olson joined the Imperatis Board in April 2013. Olson retired from the U.S. Navy in 2011 after more than 38 years of military service. He was the first Navy SEAL officer to be promoted to the three-star and four-star ranks. He served as head of the US Special Operations Command, where he was responsible for the mission readiness of all U.S. Army, Navy, Air Force, and Marine Corps Special Operations Forces.

Olson is now an independent national security consultant for private and public sector organizations as the president of the ETO Group. He is an adjunct professor in the School of International and Public Affairs at Columbia University and serves as director of Iridium Communications, Under Armour, the non-profit Special Operations Warrior Foundation, and the National Navy UDT-SEAL Museum.

MASTIN M. ROBESON

Retired Major General Mastin Robeson joined Imperatis as President and Chief Executive Officer in March 2013. Robeson retired in February 2010 after 34 years of active service in the U.S. Marine Corps, during which time he served in more than 60 countries. He commanded a Combined/Joint Task Force in the Horn of Africa, two Marine Brigades, two Marine Divisions, and Marine Corps Special Operations Command. He also served as Secretary of Defense William Cohen’s Military Assistant and General David Petraeus’ Director of Strategy, Plans, and Assessments. He has extensive strategic planning, decision-making, and crisis management experience.

Since retiring in 2010, Robeson has operated his own consulting company, assisting more than 20 companies in business development, marketing strategy, strategic planning, executive leadership, and crisis management. He has also served on three Boards of Directors, two Boards of Advisors, a college Board of Trustees, and a major hospital’s Operations Council.

BOARD OF ADVISORS

JAMES CLUCK

James (Jim) Cluck joined the Imperatis Board of Advisors in 2013. Cluck formerly served as acquisition executive, U.S. Special Operations Command. He was responsible for all special operations forces research, development, acquisition, procurement, and logistics.

Cluck held a variety of positions at USSOCOM, including program manager for both intelligence systems and C4I automation systems; Deputy Program Executive Officer for Intelligence and Information Systems; Director of Management for the Special Operations Acquisition and Logistics Center; and Chief Information Officer and Director for the Center for Networks and Communications. During these assignments, he consolidated diverse intelligence, command and control, and information programs through common migration and technical management techniques to minimize Major Force Program-11 resourcing and enhance interoperability.

ED WINTERS

Retired Rear Admiral Ed Winters joined the Imperatis Board of Advisors in September 2014. Winters retired from the U.S. Navy after more than 33 years of military service. As a Navy SEAL, he commanded at every level in the Naval Special Warfare community as well as serving two tours in Iraq under the Multi-National Security Transition Command (MNSTC-I). During his first tour with MNSTC-I he led the successful efforts to establish the Iraqi National Counter-Terrorism Task Force. During his second tour with MNSTC-I he served as Deputy Commander, overseeing the daily training and mentoring of the Iraqi Security Architecture and Government institutions. Since retiring, Winters has consulted to multiple corporations.

Should any of these individuals appear in any relationship with any contractor on a present or future contract, run the other way. Dig in your heels and refuse to sign any checks, contracts, etc.

Imperatis Corporation was once known as Jorge Scientific, which also crashed and burned. You can find their “leadership team” at the Wayback Machine as well.

You have to wonder how many Imperatis and Jorge Scientific “leaders” are involved in other government contracts.

Suggestions for a good starting place to root them out?

Shame! Shame! John McAfee Tricks Illiterates

Filed under: Cybersecurity,Security — Patrick Durusau @ 3:10 pm

My day started with reading WhatsApp Message Hacked By John McAfee And Crew by Steve Morgan.

I thought it made the important point that while the WhatsApp message is secured by bank-vault-quality encryption:

[Image: Luxembourg_Bankmuseum_Tuer-w-note]

By LoKiLeCh (Own work) [GFDL, CC-BY-SA-3.0, CC BY-SA 2.5-2.0-1.0, GFDL, CC-BY-SA-3.0 or CC BY-SA 2.5-2.0-1.0], via Wikimedia Commons

When you enlarge the little yellow note on the front (think Android) you find:

[Image: combination]

While your message encryption may be Shannon-secure end to end, the security of your OS, to say nothing of your personal, organizational, etc., security determines whether the message is indeed “secure.”

A better illustration would be to show McAfee and crew taking the vault out of the wall (think OS) but my graphic skills aren’t up to that task. 😉

That’s a useful lesson and to be honest, McAfee says as much, in the fifth paragraph of the story.

So I almost fell off my perch when later in the morning I read:

John McAfee Apparently Tried to Trick Reporters Into Thinking He Hacked WhatsApp by William Turton.

Here’s the lead paragraph:

John McAfee, noted liar and one-time creator of anti-virus software, apparently tried to convince reporters that he hacked the encryption used on WhatsApp. To do this, he attempted to send them phones with preinstalled malware and then convince them he was reading their encrypted conversations.

Just in case you don’t follow the “noted liar” link, that’s another post written by William Turton.

The “admitted lie” was one of simplification, compressing an iPhone hack into sound-bite length.

Ever explained (or attempted to explain) computer technology to the C-suite? Then you are guilty of the same type of lies.

If someone divested themselves of their interest in WhatsApp because they didn’t read to the fifth paragraph of the original story, I’m sorry.

Read before you re-tweet/re-post and/or change your investments. Whether it’s a John McAfee story or not.

May 14, 2016

Consent/Anonymised Data Concerns For Nulled.io?

Filed under: Cybersecurity,Security — Patrick Durusau @ 10:06 pm

Famous Nulled.io Hacking Forum Suffers Devastating Data Breach by Catalin Cimpanu.

From the post:


According to security firm Risk Based Security, the leaked data was offered as a 1.3 GB tar archive that decompressed to a 9.45 GB db.sql file, which was a database dump of the entire forum’s database.

Everything from user accounts to private messages, and from VIP forum posts to financial transactions were included. More precisely, the data contained 536,064 user accounts, 800,593 user personal messages, 5,582 purchase records, and 12,600 invoices.

For each user, leaked data included his forum username, email address, hashed password, join date, IP records, and other forum-related tidbits such as titles and post counts.

Crime investigation agencies are most likely to be interested in this leak since it also includes 907,162 authentication logs with geolocation data that will allow them to tie various criminal activity to IPs, forum usernames, and email addresses.

I am waiting to see Oliver Keyes (of OKCupid data and Scientific Censorship) ride in to condemn this unknown hacker for breaching the privacy of the users of Nulled.io and for the data not being anonymised.

Or in Oliver’s words on another data breach:

…this is without a doubt one of the most grossly unprofessional, unethical and reprehensible data releases I have ever seen.

I wonder where this one ranks?

Considering that criminal charges are a distinct possibility from the data breach?

I haven’t looked at the data, yet, but if hackers failed to take steps to conceal their identities on a site devoted to hacking, user education on security may be a lost cause.

May 13, 2016

Receding Trust In Internet Privacy

Filed under: Cybersecurity,Privacy,Security — Patrick Durusau @ 8:49 pm

You may have seen this post on Twitter:

[Image: trust-internet-01-450]

So, what is this:

…single problem that we just can’t seem to solve[?]

The Washington Post headline was even more lurid: Why a staggering number of Americans have stopped using the Internet the way they used to.

The government post releasing the data was somewhat calmer: Lack of Trust in Internet Privacy and Security May Deter Economic and Other Online Activities by Rafi Goldberg.

Rafi writes:

Every day, billions of people around the world use the Internet to share ideas, conduct financial transactions, and keep in touch with family, friends, and colleagues. Users send and store personal medical data, business communications, and even intimate conversations over this global network. But for the Internet to grow and thrive, users must continue to trust that their personal information will be secure and their privacy protected.

NTIA’s analysis of recent data shows that Americans are increasingly concerned about online security and privacy at a time when data breaches, cybersecurity incidents, and controversies over the privacy of online services have become more prominent. These concerns are prompting some Americans to limit their online activity, according to data collected for NTIA in July 2015 by the U.S. Census Bureau. This survey included several privacy and security questions, which were asked of more than 41,000 households that reported having at least one Internet user.

Perhaps the most direct threat to maintaining consumer trust is negative personal experience. Nineteen percent of Internet-using households—representing nearly 19 million households—reported that they had been affected by an online security breach, identity theft, or similar malicious activity during the 12 months prior to the July 2015 survey. Security breaches appear to be more common among the most intensive Internet-using households. For example, while 9 percent of online households that used just one type of computing device (either a desktop, laptop, tablet, Internet-connected mobile phone, wearable device, or TV-connected device) reported security breaches, 31 percent of those using at least five different types of devices suffered this experience (see Figure 1).

No real surprises in the report until you reach:


NTIA’s initial analysis only scratches the surface of this important area, but it is clear that policymakers need to develop a better understanding of mistrust in the privacy and security of the Internet and the resulting chilling effects. In addition to being a problem of great concern to many Americans, privacy and security issues may reduce economic activity and hamper the free exchange of ideas online.

I’m sorry, given that almost 1 out of every 5 households surveyed had suffered from an online security breach, what is there to “…better understand…” about their mistrust?

The Internet, their computers and other online devices, etc., are all insecure.

What seems to be the problem with acknowledging that fact?

It’s misleading for the Washington Post to wave its hands and say this is a “…single problem that we just can’t seem to solve.”

Online services and computers can be made less insecure, but no computer system is completely secure. (Not even the ones used by the NSA. Remember Snowden.)

Nor can computer systems be made less insecure without some effort from users.

I know, I know, I’m blaming all those users who get hacked. Teaching users to protect themselves has some chance of a positive outcome. Wringing your hands over poor hacked users that someone should be protecting has none.

Educate yourself about basic computer security and be careful out there. The number of assholes on the Internet seems to multiply geometrically. Even leaving state actors to one side.

Flawed Input Validation = Flawed Subject Recognition

Filed under: Cybersecurity,Security,Software,Subject Identity,Topic Maps — Patrick Durusau @ 7:47 pm

In Vulnerable 7-Zip As Poster Child For Open Source, I covered some of the details of two vulnerabilities in 7-Zip.

Both of those vulnerabilities were summarized by the discoverers:

Sadly, many security vulnerabilities arise from applications which fail to properly validate their input data. Both of these 7-Zip vulnerabilities resulted from flawed input validation. Because data can come from a potentially untrusted source, data input validation is of critical importance to all applications’ security.

The first vulnerability is described as:

TALOS-CAN-0094, OUT-OF-BOUNDS READ VULNERABILITY, [CVE-2016-2335]

An out-of-bounds read vulnerability exists in the way 7-Zip handles Universal Disk Format (UDF) files. The UDF file system was meant to replace the ISO-9660 file format, and was eventually adopted as the official file system for DVD-Video and DVD-Audio.

Central to 7-Zip’s processing of UDF files is the CInArchive::ReadFileItem method. Because volumes can have more than one partition map, their objects are kept in an object vector. To start looking for an item, this method tries to reference the proper object using the partition map’s object vector and the “PartitionRef” field from the Long Allocation Descriptor. Lack of checking whether the “PartitionRef” field is bigger than the available amount of partition map objects causes a read out-of-bounds and can lead, in some circumstances, to arbitrary code execution.

(code in original post omitted)

This vulnerability can be triggered by any entry that contains a malformed Long Allocation Descriptor. As you can see in lines 898-905 from the code above, the program searches for elements on a particular volume, and the file-set starts based on the RootDirICB Long Allocation Descriptor. That record can be purposely malformed for malicious purpose. The vulnerability appears in line 392, when the PartitionRef field exceeds the number of elements in PartitionMaps vector.
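The missing check is simple to state: before using the "PartitionRef" field as an index into the partition map vector, compare it against the number of parsed partition maps. A minimal sketch of that validated lookup in Python (the function and field names are mine, not 7-Zip's, which is C++):

```python
def read_file_item(partition_maps, long_alloc_descriptor):
    """Sketch of the validated lookup that 7-Zip's CInArchive::ReadFileItem
    lacked. partition_maps is the list of partition map objects parsed from
    the volume; long_alloc_descriptor carries the 'partition_ref' field."""
    ref = long_alloc_descriptor["partition_ref"]
    # The fix: reject any reference outside the parsed partition maps,
    # instead of reading out of bounds.
    if ref < 0 or ref >= len(partition_maps):
        raise ValueError(f"PartitionRef {ref} out of range "
                         f"(only {len(partition_maps)} partition maps)")
    return partition_maps[ref]

# A well-formed descriptor resolves; a malformed one is rejected
# instead of causing an out-of-bounds read:
maps = ["map0", "map1"]
print(read_file_item(maps, {"partition_ref": 1}))   # map1
try:
    read_file_item(maps, {"partition_ref": 7})
except ValueError as e:
    print("rejected:", e)
```

The point is that the untrusted field crosses a trust boundary the moment it is used as an index, so the validation must happen before the lookup, not after.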

I would describe the lack of a check on the “PartitionRef” field in topic maps terms as allowing a subject, here a string, of indeterminate size. That is, there is no constraint on the size of the subject.

That may seem like an obtuse way of putting it, but consider that a subject, here a reference larger than the “available amount of partition map objects,” can stand in association with other subjects, such as the user (subject) who has invoked the application (association) containing the 7-Zip vulnerability (subject).

Err, you don’t allow users with shell access to suid root do you?

If you don’t, at least not running a vulnerable program as root may help dodge that bullet.

Or in topic maps terms, knowing the associations between applications and users may be a window on the severity of vulnerabilities.

Lest you think logging suid is an answer, remember they were logging Edward Snowden’s logins as well.

Suid logs may help for next time, but aren’t preventative in nature.

BTW, if you are interested in the details on buffer overflows, Smashing The Stack For Fun And Profit looks like a fun read.

Vulnerable 7-Zip As Poster Child For Open Source

Filed under: Compression,Cybersecurity,Open Source,Security — Patrick Durusau @ 12:28 pm

Anti-virus products, security devices affected by 7-Zip flaws by David Bisson.

From the post:


But users be warned. Cisco Talos recently discovered multiple vulnerabilities in 7-Zip that are more serious than regular security flaws. As explained in a blog post by Marcin Noga and Jaeson Schultz, two members of the Cisco Talos Security Intelligence & Research Group:

“These type of vulnerabilities are especially concerning since vendors may not be aware they are using the affected libraries. This can be of particular concern, for example, when it comes to security devices or antivirus products. 7-Zip is supported on all major platforms, and is one of the most popular archive utilities in-use today. Users may be surprised to discover just how many products and appliances are affected.”

Cisco Talos has identified two flaws in particular. The first (CVE-2016-2335) is an out-of-bounds read vulnerability that exists in the way 7-Zip handles Universal Disk Format (UDF) files. An attacker could potentially exploit this vulnerability to achieve arbitrary code execution.

The “many products and appliances” link results in one search, and the post suggests a second search string; screenshots of both result counts appear in the original post.

Every instance of software running a vulnerable 7-Zip library is subject to this hack. A number likely larger than the total 2,490,000 shown by these two searches.

For open source software, you can check to see if it has been upgraded to 7-Zip, version 16.0.

If you have non-open source software, how are you going to check for the upgrade?

Given the lack of liability under the usual EULA, are you really going to take a vendor’s word for the upgrade?

The vulnerable 7-Zip library is a great poster child for open source software.

Not only for the discovery of flaws but to verify vendors have properly patched those flaws.
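One crude way to check your own systems for bundled copies of the library is to scan for the common 7-Zip shared-library filenames and then inspect the versions of whatever turns up. A sketch (the filename list is illustrative, not exhaustive):

```python
import os

# Common filenames under which the 7-Zip library ships (not exhaustive).
SEVENZIP_NAMES = {"7z.dll", "7z.so", "7-zip.dll", "p7zip"}

def find_7zip_libraries(root):
    """Walk a directory tree and return paths whose filename suggests a
    bundled 7-Zip library, as candidates for a manual version check."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.lower() in SEVENZIP_NAMES:
                hits.append(os.path.join(dirpath, name))
    return hits

# Point it at an application directory and inspect whatever it reports:
print(find_7zip_libraries("."))
```

This only finds the obvious cases; statically linked copies of the code are exactly the ones, as Talos notes, that vendors themselves may not know about.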

May 12, 2016

OKCupid data and Scientific Censorship

Filed under: Cybersecurity,Privacy,Security — Patrick Durusau @ 2:40 pm

Scientific consent, data, and doubling down on the internet by Oliver Keyes.

From the post:

There is an excellent Tim Minchin song called If You Open Your Mind Too Much, Your Brain Will Fall Out. I’m sad to report that the same is also true of your data and your science.

At this point in the story I’d like to introduce you to Emil Kirkegaard, a self-described “polymath” at the University of Aarhus who has neatly managed to tie every single way to be irresponsible and unethical in academic publishing into a single research project. This is going to be a bit long, so here’s a TL;DR: linguistics grad student with no identifiable background in sociology or social computing doxes 70,000 people so he can switch from publishing pseudoscientific racism to publishing pseudoscientific homophobia in the vanity journal that he runs.

Yeah, it’s just as bad as it sounds.

The Data

Yesterday morning I woke up to a Twitter friend pointing me to a release of OKCupid data, by Kirkegaard. Having now spent some time exploring the data, and reading both public statements on the work and the associated paper: this is without a doubt one of the most grossly unprofessional, unethical and reprehensible data releases I have ever seen.

There are two reasons for that. The first is very simple; Kirkegaard never asked anyone. He didn’t ask OKCupid, he didn’t ask the users covered by the dataset – he simply said ‘this is public so people should expect it’s going to be released’.

This is bunkum. A fundamental underpinning of ethical and principled research – which is not just an ideal but a requirement in many nations and in many fields – is informed consent. The people you are studying or using as a source should know that you are doing so and why you are doing so.

And the crucial element there is “informed”. They need to know precisely what is going on. It’s not enough to simply say ‘hey, I handed them a release buried in a pile of other paperwork and they signed it’: they need to be explicitly and clearly informed.

Studying OKCupid data doesn’t allow me to go through that process. Sure: the users “put it on the internet” where everything tends to end up public (even when it shouldn’t). Sure: the users did so on a site where the terms of service explicitly note they can’t protect your information from browsing. But the fact of the matter is that I work in this field and I don’t read the ToS, and most people have a deeply naive view of how ‘safe’ online data is and how easy it is to backtrace seemingly-meaningless information to a real life identity.

In fact, gathering of the data began in 2014, meaning that a body of the population covered had no doubt withdrawn their information from the site – and thus had a pretty legitimate reason to believe that information was gone – when Kirkegaard published. Not only is there not informed consent, there’s good reason to believe there’s an implicit refusal of consent.

The actual data gathered is extensive. It covers gender identity, sexuality, race, geographic location; it covers BDSM interests, it covers drug usage and similar criminal activity, it covers religious beliefs and their intensity, social and political views. And it does this for seventy thousand different people. Hell, the only reason it doesn’t include profile photos, according to the paper, is that it’d take up too much hard-drive space.

Which nicely segues into the second reason this is a horrifying data dump: it is not anonymised in any way. There’s no aggregation, there’s no replacement-of-usernames-with-hashes, nothing: this is detailed demographic information in a context that we know can have dramatic repercussions for subjects.

This isn’t academic: it’s willful obtuseness from a place of privilege. Every day, marginalised groups are ostracised, excluded and persecuted. People made into the Other by their gender identity, sexuality, race, sexual interests, religion or politics. By individuals or by communities or even by nation states, vulnerable groups are just that: vulnerable.

This kind of data release pulls back the veil from those vulnerable people – it makes their outsider interests or traits clear and renders them easily identifiable to their friends and communities. It’s happened before. This sort of release is nothing more than a playbook and checklist for stalkers, harassers, rapists.

It’s the doxing of 70,000 people for a fucking paper.

I offer no defense for Emil Kirkegaard’s paper, its methods or conclusions.

I have more sympathy for Oliver’s concerns over consent and anonymised data than say the International Consortium of Investigative Journalists (ICIJ) and their concealment of the details from the Panama Papers, but only just.

It is in the very nature of data “leaks” that no consent is asked of or given by those exposed by the “leak.”

Moreover, anonymised data sounds suspiciously like ICIJ saying they can protect the privacy of the “innocents” in the Panama Papers leak.

I don’t know, hiding from the tax man doesn’t raise a presumption of innocence to me. You?

Someone has to decide who are “innocents,” or who merits protection of anonymised data. To claim either one, means you have someone in mind to fill that august role.

In our gender-skewed academic systems, would that be your more than likely male department head?

My caveat to Oliver’s post is even with good intentions, the power to censor data releases is a very dangerous one. One that reinforces the power of those who possess it.

The less dangerous strategy is to teach users that if information is recorded, it will be leaked. Perhaps not today, maybe not tomorrow, but certainly by the day after that.

Choose what information you record carefully.

107,000 Anal Fisting Aficionados But No Senate Torture Report

Filed under: Cybersecurity,Porn,Security — Patrick Durusau @ 10:01 am

Huge embarrassment over fisting site data breach by John Leyden.

From the post:

A data breach at a forum for “anal fisting” has resulted in the exposure of 107,000 accounts.

Of course, ‘;–have i been pwned? plays the “I know something you don’t” game, loads the data but blocks searching.

I didn’t look hard for the data dump but for details sufficient to replicate this hack, see:

Another Day, Another Hack: Is Your Fisting Site Updating Its Forum Software? by Joseph Cox.

Quick search shows there are about 15K reports (including duplicates) on exposure of these 107,000 anal fisting aficionados.

It’s mildly amusing to think of the reactions of elected officials, military officers, etc., caught up in such a data breach (sorry) but where is the full U.S. Senate Torture Report?

If you are going to risk jail time for hacking, shouldn’t it be for something more lasting than a list of anal fisters?

Is there a forum for nominating and voting on (anonymously) targets for hacking?

PS: Leaking data to ‘;–have i been pwned?, the International Consortium of Investigative Journalists or Wikileaks, etc., only empowers new exercises of privilege. Leak to them if you like but leak to the public as well.

May 11, 2016

Hunting Bugs In Porn Site (or How to Explain Your Browsing History)

Filed under: Cybersecurity,Porn,Security — Patrick Durusau @ 10:19 am

Pornhub Launches Bug Bounty Program; Offering Reward up to $25,000 by Swati Khandelwal.

From the post:


The world’s most popular pornography site PornHub has launched a bug bounty program for security researchers and bug hunters who can find and report security vulnerabilities in its website.

Partnered with HackerOne, PornHub is offering to pay independent security researchers and bug hunters between $50 and $25,000, depending upon the impact of vulnerabilities they find. (emphasis in the original)

As always, there are some exclusions:


Vulnerabilities such as cross-site request forgery (CSRF), information disclosure, cross domain leakage, XSS attacks via Post requests, HTTPS related (such as HSTS), HttpOnly and Secure cookie flags, missing SPF records and session timeout will not be considered for the bounty program.

I take “information disclosure” to mean that if your hack involves NSA credentials it doesn’t count. Well, you can’t make it too easy.

The program is in beta so see Swati’s post for further details.

This PornHub program benefits people asked awkward questions about their browsing history.

Yes, you were looking at PornHub or related sites. You were doing “security research.”

Being in HR or accounting may make that claim less credible. 😉

May 9, 2016

White Hat Hacker Jailed – Screen Capturing Your Crime

Filed under: Cybersecurity,Government,Security — Patrick Durusau @ 3:32 pm

White Hat Researcher Jailed for Exposing SQLi Flaws by Phil Muncaster.

The headline is misleading and the lead paragraph makes the same mistake:

A cybersecurity researcher who exposed vulnerabilities in a Florida elections website was last week arrested and charged on three third-degree felony counts.

It isn’t until later that you read:


“Dave obviously found a serious risk but rather than just stopping there and reporting it, he pointed a tool at it that sucked out a volume of data,” he explained in a blog post. “That data included credentials stored in plain text (another massive oversight on their behalf) which he then used to log onto the website and browse around private resources (or at least resources which were meant to be private).”

Watch the video that includes a screen capture not only of the attack, but of Dave Levin downloading files from the breached server.

All most people will read is “White Hat Hacker Jailed,” which is a severe disservice to the security community generally.

A more accurate headline would read:

White Hat Hacker Jailed For Screen Capturing His Crime

When you find a vulnerability you can:

  1. Report it, or
  2. Exploit it.

What is ill-advised is to screen capture yourself exploiting a vulnerability and then publishing it.

It’s true that corrupt politics are at play here but what other kind did you think existed?

No one, especially incompetent leadership, enjoys being embarrassed. Incompetent political leadership is often in a position to retaliate against those who embarrass it. Just a word to the wise.

PS: If you are going to commit a cyber-crime, best thinking is to NOT record it.

May 5, 2016

World Password Day (May 5th)

Filed under: Cybersecurity,Security — Patrick Durusau @ 10:13 am

World Password Day

Yeah, there is a website for “World Password Day.”

What Brian Barrett over at Wired refers to as a “made-up holiday.” 7 Password Experts on How to Lock Down Your Online Security.

I won’t tell Brian that all holidays are “made-up” if you don’t. Promise.

Read Brian’s post anyway because he does report seven tips that will make your password stronger than the average password.

Notice I did not say your password will be secure, just more secure than passwords on average.

Use this as a reminder to check your passwords.
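If you want something more reliable than a "made-up holiday" resolution, let a machine pick your passwords. A minimal sketch using Python's standard secrets module (the length and alphabet are my choices, not anyone's official policy):

```python
import secrets
import string

def make_password(length=20):
    """Generate a random password from letters, digits and punctuation,
    using the OS's cryptographic random source via the secrets module."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = make_password()
print(pw, len(pw))
```

Pair it with a password manager; the underlying point of most of the seven tips is that humans are bad at inventing and remembering strings like these.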

I first saw this in Excellent advice for generating and maintaining your passwords by Cory Doctorow.

May 3, 2016

Command Line Profiles (What’s Yours?)

Filed under: Cybersecurity,Latent Semantic Analysis,Security — Patrick Durusau @ 4:57 pm

Employing Latent Semantic Analysis to Detect Malicious Command Line Behavior by Jonathan Woodbridge.

From the post:

Detecting anomalous behavior remains one of security’s most impactful data science challenges. Most approaches rely on signature-based techniques, which are reactionary in nature and fail to predict new patterns of malicious behavior and modern adversarial techniques. Instead, as a key component of research in Intrusion Detection, I’ll focus on command line anomaly detection using a machine-learning based approach. A model based on command line history can potentially detect a range of anomalous behavior, including intruders using stolen credentials and insider threats. Command lines contain a wealth of information and serve as a valid proxy for user intent. Users have their own discrete preferences for commands, which can be modeled using a combination of unsupervised machine learning and natural language processing. I demonstrate the ability to model discrete commands, highlighting normal behavior, while also detecting outliers that may be indicative of an intrusion. This approach can help inform at scale anomaly detection without requiring extensive resources or domain expertise.
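Jonathan's approach uses latent semantic analysis over command histories; as a much simpler stand-in for the same idea, here is a pure-Python sketch that builds a per-user command-frequency profile and flags sessions whose cosine similarity to that profile falls below a threshold (the commands and the threshold are invented for illustration):

```python
import math
from collections import Counter

def profile(history):
    """Frequency profile of a command history (a list of command names)."""
    return Counter(history)

def cosine(a, b):
    """Cosine similarity between two frequency profiles."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_anomalous(baseline, session, threshold=0.5):
    """Flag a session whose command mix diverges from the user's baseline."""
    return cosine(profile(baseline), profile(session)) < threshold

# A user's "normal" behavior vs. a bulk-copying session that looks
# nothing like it:
normal = ["ls", "cd", "vim", "git", "ls", "cd", "make", "git"]
odd = ["tar", "scp", "nc", "dd", "scp"]
print(is_anomalous(normal, ["ls", "cd", "git", "vim"]))  # False
print(is_anomalous(normal, odd))                         # True
```

LSA goes further by reducing the term-document matrix so that related commands share dimensions, but the raw-frequency version is enough to show both sides of the game described below: the defender's detection and the attacker's incentive to mimic a plausible profile.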

This is very cool and a must read on all sides of cybersecurity.

From the perspective of Jonathan’s post, how do you detect “malicious” command line behavior? From the perspective of a defender.

Equally useful for what profiles do you mimic in order to not be detected as engaging in “malicious” command line behavior?

For example, do you mimic the profile of the sysadmin who is in charge of backups, since their “normal” behavior will be copying files in bulk and/or running scripts that accomplish that task?

Or for that matter, how do you build up profiles and possibly modify profiles over time, by running commands in the user’s absence, to avoid detection?

Opportunity is knocking, are you going to answer the door?

I first saw this in a tweet by Kirk Borne.

May 1, 2016

How-To Document Conspiracies and Other Crimes

Filed under: Cybersecurity,IRC,Security — Patrick Durusau @ 4:10 pm

I was reading the supplemental indictment of Lauri Love (New Jersey, Crim. No. 13-712, 03/23/15) when I stumbled on:

The text of the chats is reproduced in this Superseding Indictment as it appears in the chat logs; errors in spelling and punctuation have not been corrected. [Footnote 1, page 8]

It never occurred to me:

8. The manner and means by which defendant LOVE and others sought to accomplish the conspiracy included, among other things, the following:

j. It was further part of the conspiracy that defendant LOVE and other Co-Conspirators would communicate about their hacking activities in secure IRC channels. The Co-Conspirators would use more than one screen name (“nic” or “nicks”) and would often change names to further conceal their identities. For example, in an IRC communication on or about January 24, 2013, LOVE, using the online moniker “route,” discussed his efforts to conceal his identity and hacking activities, and to avoid detection: (emphasis added)

That’s the hack documentation solution isn’t it? Using “…secure IRC channels!”

In addition to using “…secure IRC channels” to engage in and further the conspiracy, those channels captured evidence of:

  1. Naming victims
  2. Discussing vulnerabilities of specific victims
  3. Discussing active hacks (tying dates to acts)
  4. Discussing results of hacks

I haven’t seen the full chat log (leakers anywhere with a full copy?) but a chat log with dates, victims, results, exploits used, etc. can document what would otherwise have to be inferred from forensics on the targeted systems.

There may be other IRC chat servers that log, but I know InspIRCd offers USERINPUT/USEROUTPUT logging, which uses a lot of disk space.

IRC clients also offer the ability to capture logs of chats on any channel (see, for example, Recording an IRC channel on Linux/Ubuntu). Specifics vary from client to client, so check your documentation.

Even if neither the IRC server nor you are capturing a chat log, anyone else on the channel may be capturing the chat. If you forgot to capture, ask another member of the chat for their log.
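Whether the logging happens server-side or client-side, raw protocol lines are trivial to turn into a timestamped record. A sketch of that step in Python (the PRIVMSG format is from the IRC protocol specification; the nick and channel below are made up):

```python
import re

# An IRC channel message: ":nick!user@host PRIVMSG #target :text"
PRIVMSG = re.compile(
    r"^:(?P<nick>[^!\s]+)\S*\s+PRIVMSG\s+(?P<target>\S+)\s+:(?P<text>.*)$")

def log_line(raw, timestamp):
    """Turn a raw IRC PRIVMSG line into a timestamped log entry, the kind
    of record a server or any client on the channel can keep."""
    m = PRIVMSG.match(raw)
    if not m:
        return None
    return f"[{timestamp}] <{m['nick']}> {m['target']}: {m['text']}"

raw = ":route!~u@host PRIVMSG #chan :we hit the target last night"
print(log_line(raw, "2013-01-24 03:12"))
# → [2013-01-24 03:12] <route> #chan: we hit the target last night
```

Every participant's client receives these same lines, which is why a "secure" channel is only as private as its least discreet member.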

The more detailed your chat, the easier it will be to match up your activities with such forensics as exist on the targeted systems and evidence on one or more of the computers used to carry out the hacks.

Saying IRC channels are “secure” is a mistake of fact. They are “secure” in the sense that if you don’t have a network connection, you can’t join the chat. See: How to Setup a Secure Private IRC Channel for more mis-use of the terms “secure,” “private.”

OnionIRC appeared long after the indictments of Lauri Love.

https://www.youtube.com/watch?v=YrnGQ8FMGHA

It isn’t possible to know if an OnionIRC server on the Dark Web is logging your IRC chats or not. For example, one docker container for running an IRC server as a Tor hidden service, explicitly calls out that logging is disabled by default but that can be changed.

If you want documentation of your conspiracy and other crimes while using IRC, if you use an OnionIRC server, on the Dark Web or not, be sure to capture your chat log.

Remember a detailed log for a secure IRC channel can be invaluable for documenting conspiracies, other crimes and matching up with other evidence.

Why would you want to document conspiracies and other crimes? Well, bragging rights in the prison yard, priority in terms of the first hack of X or use of Y for a hack, autobiography, CV in some cases. There are downsides to documenting conspiracies and other crimes but I am sure those will occur to you without any encouragement from me.

PS:

I’m assuming in this post that you have accepted the risk of communicating with unknown others over IRC. Remember the saying: “On the Internet, no one knows you are a dog (or police, or an intelligence agency).”

April 28, 2016

U.S. Government Surveillance Breeds Meekness, Fear and Self-Censorship [Old News]

Filed under: Cybersecurity,Government,Privacy,Security — Patrick Durusau @ 7:45 pm

New Study Shows Mass Surveillance Breeds Meekness, Fear and Self-Censorship by Glenn Greenwald.

From the post:

A newly published study from Oxford’s Jon Penney provides empirical evidence for a key argument long made by privacy advocates: that the mere existence of a surveillance state breeds fear and conformity and stifles free expression. Reporting on the study, the Washington Post this morning described this phenomenon: “If we think that authorities are watching our online actions, we might stop visiting certain websites or not say certain things just to avoid seeming suspicious.”

The new study documents how, in the wake of the 2013 Snowden revelations (of which 87% of Americans were aware), there was “a 20 percent decline in page views on Wikipedia articles related to terrorism, including those that mentioned ‘al-Qaeda,’ “car bomb’ or ‘Taliban.’” People were afraid to read articles about those topics because of fear that doing so would bring them under a cloud of suspicion. The dangers of that dynamic were expressed well by Penney: “If people are spooked or deterred from learning about important policy matters like terrorism and national security, this is a real threat to proper democratic debate.”

As the Post explains, several other studies have also demonstrated how mass surveillance crushes free expression and free thought. A 2015 study examined Google search data and demonstrated that, post-Snowden, “users were less likely to search using search terms that they believed might get them in trouble with the US government” and that these “results suggest that there is a chilling effect on search behavior from government surveillance on the Internet.”

While I applaud Greenwald and others who are trying to expose the systematic dismantling of civil liberties in the United States, at least as enjoyed by the privileged, the breeding of meekness, fear and self-censorship is hardly new.

Meekness, fear and self-censorship are especially not new to the non-privileged.

Civil Rights:

Many young activists of the 1960s saw their efforts as a new departure and themselves as a unique generation, not as actors with much to learn from an earlier, labor-infused civil rights tradition. Persecution, censorship, and self-censorship reinforced that generational divide by sidelining independent black radicals, thus whitening the memory and historiography of the Left and leaving later generations with an understanding of black politics that dichotomizes nationalism and integrationism.

The Long Civil Rights Movement and the Political Uses of the Past by Jacquelyn Dowd Hall, at page 1253.

Communism:

Those who might object to a policy that is being defended on the grounds that it is protecting threats to the American community may remain silent rather than risk isolation. Arguably, this was the greatest long-term consequence of McCartyism. No politician thereafter could be seen to be soft on Communism, so that America could slide, almost by consensus, into a war against Vietnamese communists without rigorous criticism of successive administrations’ policies ever being mounted. Self-censoring of political and social debate among politicians and others can act to counter the positive effects of the country’s legal rights of expression.

Political Conflict in American by Alan Ware, pages 63-64.

The breeding of meekness, fear and self-censorship has long been a tradition in the United States. A tradition far older than the Internet.

A tradition that was enforced by fear of loss of employment, social isolation, loss of business.

You may recall in Driving Miss Daisy when her son (Boolie) worries about not getting invited to business meetings if he openly supports Dr. Martin Luther King. You may mock Boolie now but that was a day to day reality. Still is, most places.

How to respond?

Supporting Wikileaks, Greenwald and other journalists is a start towards resisting surveillance, but don’t take it as a given that journalists will be able to preserve free expression for all of us.

As a matter of fact, journalists have been shown to be as reticent as the non-privileged:


Even the New York Times, the most aggressive news organization throughout the year of investigations, proved receptive to government pleas for secrecy. The Times refused to publicize President Ford’s unintentional disclosure of assassination plots. It joined many other papers in suppressing the Glomar Explorer story and led the editorial attacks on the Pike committee and on Schorr. The real question, as Tom Wicker wrote in 1978, is not “whether the press had lacked aggressiveness in challenging the national-security mystique, but why?” Why, indeed, did most journalists decide to defer to the administration instead of pursuing sensational stories?

Challenging the Secret Government by Kathryn S. Olmsted, at page 183.

You may have noticed the lack of national press organs in the United States challenging the largely fictional “war on terrorism.” There is the odd piece, You’re more likely to be fatally crushed by furniture than killed by a terrorist by Andrew Shaver, but those are easily missed in the maelstrom of unquestioning coverage of any government press release on terrorism.

My suggestion? Don’t be meek, fearful or self-censor. Easier said than done but every instance of meekness, fearfulness or self-censorship, is another step towards the docile population desired by governments and others.

Let’s disappoint them together.

April 27, 2016

Hacking Book Sale! To Support the Electronic Frontier Foundation

Filed under: Books,Cybersecurity,Electronic Frontier Foundation,Free Speech,Security — Patrick Durusau @ 4:38 pm

Humble Books Bundle: Hacking

No Starch Press has teamed up with Humble Bundle to raise money for the Electronic Frontier Foundation (EFF)!

$366 worth of No Starch hacking books on a pay what you want basis!

Charitable opportunities don’t get any better than this!

As I type this post, sales of these bundles have rolled past 6,200!

To help me participate in this sale, consider a donation.

Thanks!

April 25, 2016

Beautiful People Love MongoDB (But Not Brits)

Filed under: Cybersecurity,Security — Patrick Durusau @ 3:54 pm

BeautifulPeople.com Leaks Very Private Data of 1.1 Million ‘Elite’ Daters — And It’s All For Sale by Thomas Fox-Brewster.

From the post:

Sexual preference. Relationship status. Income. Address. These are just some details applicants for the controversial dating site BeautifulPeople.com are asked to supply before their physical appeal is judged by the existing user base, who vote on who is allowed in to the “elite” club based on looks alone. All of this, of course, is supposed to remain confidential. But much of that supposedly-private information is now public, thanks to the leak of a database containing sensitive data of 1.1 million BeautifulPeople.com users. The leak, according to one researcher, also included 15 million private messages between users. Another said the data is now being sold by traders lurking in the murky corners of the web.

News of the breach was passed to FORBES initially in December 2015 by researcher Chris Vickery. At the time, BeautifulPeople.com said the compromised data came from a test server, which was quickly locked up. It did not appear to be a serious incident.

But the information – which now appears to be real user data despite being hosted on a non-production server – was taken by one or more less-than-scrupulous individuals before the lockdown, making it out into the dirty world of data trading this year.

“We’re looking at in excess of 100 individual data attributes per person,” Hunt told FORBES. “Everything you’d expect from a site of this nature is in there.”

Vickery said the database he’d obtained contained 15 million messages between users. One exchange shown to FORBES involved users asking for prurient pictures of one another. A separate message read: “I didn’t even think to look for a better photo because the brits, on average, are some ugly motherf***ers anyway.” This would appear to chime with BeautifulPeople.com’s own “research”.

Don’t be in the act of drinking any hot or cold beverages when you visit “BeautifulPeople.com’s own “research”.” You may hurt yourself or ruin a keyboard. Fair warning.

The relative inaccessibility of these hacked data sets prevents leaks from acting as incentives for online services to improve their data security.

Imagine Forbes running data market pricing for “beautiful people,” living in Stockholm, for example. A very large number of people would imagine themselves to be in that set, which would set the price of that sub-set accordingly.

Moreover, it would be harder for BeautifulPeople.com to recruit new members who are aware of the company’s lax security practices.

Thomas says that the leak was from a non-production MongoDB server.

That’s one of those databases that installs with no password for root and no obvious (in the manual) way to set it. I say “not obvious”; take a look at page 396 of the MongoDB Reference Manual, Release 3.2.5, April 25, 2016, where you will find:

The localhost exception allows you to enable access control and then create the first user in the system. With the localhost exception, after you enable access control, connect to the localhost interface and create the first user in the admin database. The first user must have privileges to create other users, such as a user with the userAdmin (page 488) or userAdminAnyDatabase (page 493) role.

Changed in version 3.0: The localhost exception changed so that these connections only have access to create the first user on the admin database. In previous versions, connections that gained access using the localhost exception had unrestricted access to the MongoDB instance.

The localhost exception applies only when there are no users created in the MongoDB instance.

First mention of password in the manual.

Should you encounter a MongoDB instance in the wild, 3.0 or earlier….
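For completeness, the "localhost exception" workflow the manual describes boils down to: connect locally, create one admin user with user-creation privileges, after which access control is enforced. A hedged sketch (the helper, user name and password are mine; the pymongo connection at the end is illustrative and untested against any server):

```python
def create_user_command(name, password, roles):
    """Build the 'createUser' command document described in the manual:
    the first user must be able to create other users, e.g. via the
    userAdminAnyDatabase role on the admin database."""
    return {"createUser": name, "pwd": password,
            "roles": [{"role": r, "db": "admin"} for r in roles]}

cmd = create_user_command("siteAdmin", "change-me", ["userAdminAnyDatabase"])
print(cmd)

# With a live server and the pymongo driver, one would run something like:
# from pymongo import MongoClient
# client = MongoClient("localhost", 27017)  # localhost exception applies
# client.admin.command(cmd)
```

Until someone performs those steps, the instance sits there answering anyone who can reach its port, which is how non-production servers like the one above end up in the data trade.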

Anonymity and Privacy – Lesson 1

Filed under: Cybersecurity,Privacy,Security — Patrick Durusau @ 3:10 pm

“Welcome to ‘How to Triforce’ advanced”

Transcript of the first OnionIRC class on anonymity and privacy.

From the introduction:

Welcome to the first of (hopefully) many lessons to come here on the OnionIRC, coming to you live from The Onion Routing network! This lesson is an entry-level course on Anonymity and Privacy.

Some of you may be wondering why we are doing this. What motivates us? Some users have shown concern that this network might be “ran by the feds” and other such common threads of discussion in these dark corners of the web. I assure you, our goal is to educate. And I hope you came to learn. No admin here will ask you something that would compromise your identity nor ask you to do anything illegal. We may, however, give you the tools and knowledge necessary to commit what some would consider a crime. (Shout out to all the prisons out there that need a good burning!) What you do with the knowledge you obtain here is entirely your business.

We are personally motivated to participate in this project for various reasons. Over the last five years we have seen the numbers of those aligning with Anonymous soaring, while the average users’ technical knowhow has been on the decline. The average Anonymous “member” believes that DDoS and Twitter spam equates to hacking & activism, respectively. While this course is not covering these specific topics, we think this is a beginning to a better understanding of what “hacktivism” is & how to protect yourself while subverting corrupt governments.

Okay, enough with the back story. I’m sure you are all ready to start learning.

An important but somewhat jumpy discussion of OpSec (Operational Security) occurs between time marks 0:47 and 1:04.

Despite what you read in these notes, you have a substantial advantage over the NSA or any large organization when it comes to Operational Security.

You and you alone are responsible for your OpSec.

All large organizations, including the NSA, are vulnerable through employees (current/former), contractors (current/former), oversight committees, auditors, public recruitment, etc. They all leak, some more than others.

Given the NSA’s footprint, you should have better than NSA-grade OpSec from the outset. If you don’t, you need a safer hobby. Try binging on X-Files reruns.

The chat is informative, sometimes entertaining, and tosses out a number of useful tidbits but you will get more details out of the notes.

Enjoy!

April 22, 2016

Weekend Hacking Practice – WIN-T

Filed under: Cybersecurity,Security — Patrick Durusau @ 7:25 pm

win-t

U.S. Army Finds Its New Communications Network Is Vulnerable to Hackers by Aaron Pressman.

From the post:

The U.S. Army’s new $12 billion mobile communications system remains vulnerable to hackers, according to a recent assessment by outside security experts, prompting a series of further improvements.

Already in use in Iraq and Afghanistan, the Warfighter Information Network-Tactical Increment 2, or WIN-T, system is supposed to allow for protected voice, video, and data communications by troops on the move. In June, General Dynamics won a $219 million order for communications systems to go in more than 300 vehicles.

Government overseers have regularly criticized cyber security features of WIN-T in reports over the past few years, prompting an outside review by Johns Hopkins University and the Army Research Laboratory. The public reports do not disclose specific vulnerabilities, however.

Do you appreciate the use of “finds” rather than “admits to” in describing flaws in their $12 billion mobile communications system?

Public reports not “…disclos[ing] specific vulnerabilities” was very likely in the interest of saving space in the reports.

Or as noted in the DOT&E (Director, Operational Test and Evaluation) report on WIN-T:

WIN-T Increment 2 is not survivable. Although improved, WIN-T Increment 2 continues to demonstrate cybersecurity vulnerabilities. This is a complex challenge for the Army since WIN-T is dependent upon the cyber defense capabilities of all mission command systems connected to the network. (Emphasis added.) at page WIN-T 156.

Listing all the vulnerabilities of WIN-T Increment 2 or Increment 3 would be equivalent to detailing all the vulnerabilities of the Sony network.

Interesting in a cataloging sort of way but only just.

Besides, it’s more sporting to challenge hackers to find vulnerabilities in WIN-T Increment 2 or Increment 3 without a detailed listing.

PS: Talk about an attack surface: General Dynamics Receives $219 Million for U.S. Army’s WIN-T Increment 2 Systems

General Dynamics Mission Systems and more than 500 suppliers nationwide will continue to work together to build and deliver WIN-T Increment 2 systems, the Army’s “Digital Guardian Angel.”

That doesn’t include all the insecure systems that tie into the WIN-T.

Maybe they will change the acronym to RDS – Rolling Digital Sieve?

Cybersecurity Via Litigation

Filed under: Cybersecurity,Law,Security,Software — Patrick Durusau @ 3:47 pm

Ex-Hacker: If You Get Hacked, Sue Somebody by Frank Konkel.

From the post:

Jeff Moss, the hacker formerly known as Dark Tangent and founder of Black Hat and DEFCON computer security conferences, has a message for the Beltway tech community: If you get owned, sue somebody.

Sue the hackers, the botnet operators that affect your business or the company that developed insecure software that let attackers in, Moss said. The days of software companies having built-in legal “liability protections” are about to come to an end, he argued.

“When the Internet-connected toaster burns down the kitchen, someone is going to get sued,” said Moss, speaking Wednesday at the QTS Information Security and Compliance Forum in Washington, D.C. “The software industry is the only industry with liability protection. Nobody else has liability protection for some weird reason. Do you think that is going to last forever?”

Some customer and their law firm will be the first ones to tag a major software company for damages.

Will that be your company/lawyers?

The only way to dispel the aura of invulnerability around software companies is by repeated assaults from people damaged by their negligence.

Tort (think liability for civil damages) law has a long and complex history. A history that would not have developed had injured people been content to simply be injured with no compensation.

On torts in general, see: Elements of Torts in the USA by Robert B. Standler.

I tried to find an online casebook that had edited versions of some of the more amusing cases from tort history but to no avail.

You would be very surprised at what conduct has been shielded from legal liability over the years. But times do change and sometimes in favor of the injured party.

If you want to donate a used tort casebook, I’ll post examples of changing liability as encouragement for suits against software vendors. Stripped of all the legalese, facts of cases can be quite amusing/outraging.

April 20, 2016

Adobe vs. Department of Homeland Security (DHS) – Who To Trust?

Filed under: Cybersecurity,Security — Patrick Durusau @ 7:56 pm

Windows Users Warned to Dump QuickTime Pronto by David Jones.

From the post:

The U.S. Department of Homeland Security on Thursday issued a warning to remove Apple’s QuickTime for Windows. The alert came in response to Trend Micro‘s report of two security flaws in the software, which will never be patched because Apple has ended support for QuickTime for Windows.

Computers running QuickTime are open to increased risk of malicious attack or data loss, US-CERT warned, and remote attackers could take control of a victim’s computer system. US-CERT is part of DHS’ National Cybersecurity and Communications Integration Center.

“We alerted DHS because we felt the situation was broad enough that people having unpatched vulnerabilities on their system needed to be made aware,” said Christopher Budd, global threat communication manager at Trend Micro.

Then, a few days later, Adobe clouded that warning, as reported in: Don’t be too quick to uninstall QuickTime for Windows warns Adobe by Graham Cluley.

Adobe issued this notice, pointing out that removing QuickTime for Windows may bite you if you are a Creative Cloud user.

Along with no date for removing QuickTime dependencies.

BTW, Apple has announced that the vulnerabilities that led to these conflicting announcements will not be fixed.

In terms of impact, I did find these statistics on website usage of QuickTime but wasn’t able to locate statistics on user installations of QuickTime on Windows boxes.

My guess is that QuickTime on Windows machines at government installations resembles a rash. Diaper rash that is.

This is a story to keep in mind when planning dependencies on software or data that is not under your control.

PS: I would uninstall it but then I don’t run Flash either. No one is completely safe but sleeping outside, naked, is just inviting trouble. That’s the analog equivalent of running either QuickTime or Flash.

April 18, 2016

Hacking Target for Week of April 18 – 25, 2016

Filed under: Cybersecurity,Security — Patrick Durusau @ 8:08 pm

3.2 Million Machines Found Vulnerable to Ransomware Campaign by David Bisson

From the post:

Researchers have found 3.2 million machines that are vulnerable of being targeted in a ransomware campaign.

According to a post published by the Cisco Talos Security Intelligence and Research Group, attackers can leverage vulnerabilities found in WildFly, an application server that also goes by the name JBoss, as an initial point of compromise to target upwards of 3.2 million machines.

Once they have established a foothold, bad actors can download malware onto the compromised machines and move laterally across the network to infect other computers.

Such was the case in a recent Samsam ransomware campaign, where attackers used a tool known as “JexBoss” to exploit JBoss application servers.

Further investigation by the Cisco Talos research team has uncovered 2,100 JBoss backdoors that have already been installed on 1,600 unique IP addresses.

There are far more than 3.2 million systems vulnerable to ransomware campaigns, but here you have the advantage of specific targeting information and good odds of finding one of those targets.
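For the curious, the sort of initial probe a JexBoss-style tool performs can be sketched in a few lines of Python. This is a hedged illustration: the path list and header heuristic are my assumptions about commonly exposed JBoss endpoints, not a reconstruction of Talos’s or JexBoss’s actual scanner.

```python
# Hypothetical sketch of a JexBoss-style initial probe. The endpoint list
# and banner heuristic are illustrative assumptions only.
from urllib.request import urlopen
from urllib.error import URLError

# Administrative endpoints that JBoss scanners are known to probe.
JBOSS_PATHS = ("/jmx-console/", "/web-console/Invoker",
               "/invoker/JMXInvokerServlet")

def looks_like_jboss(headers):
    """Heuristic: JBoss/WildFly often names itself in these headers."""
    banner = (headers.get("X-Powered-By", "") + " " +
              headers.get("Server", "")).lower()
    return "jboss" in banner or "wildfly" in banner

def exposed_paths(base_url, timeout=5):
    """Return known JBoss admin paths on base_url answering HTTP 200
    with a JBoss-looking banner."""
    found = []
    for path in JBOSS_PATHS:
        try:
            with urlopen(base_url.rstrip("/") + path, timeout=timeout) as resp:
                if resp.status == 200 and looks_like_jboss(resp.headers):
                    found.append(path)
        except (URLError, OSError, ValueError):
            continue  # closed port, timeout, or bad URL: not exposed
    return found
```

Point it at your own servers, of course, not someone else’s.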

Not that I advocate the use of ransomware, but the increase in cyberattacks drives the need for better management of hacking information for “white,” “gray,” and “black” hats alike.

Or as they say:

It’s an ill wind indeed that doesn’t blow anyone good.

Ask yourself how much prose do you have to sift every day, day in, day out, just to remain partially current on security issues?

No, I’m not interested in fostering yet another meta-collection, rather a view into all existing collections, meta or not. Build upon what already exists and is useful.

Interested?

PS: I’m not concerned with your hat color. That’s between you and your local law enforcement officials.

April 17, 2016

HackBack! A DIY Guide [Attn: Everybody A Programmer Advocates]

Filed under: Cybersecurity,Security — Patrick Durusau @ 3:21 pm

HackBack! A DIY Guide

hack-back

From the introduction:

You’ll notice the change in language since the last edition [1]. The English-speaking world already has tons of books, talks, guides, and info about hacking. In that world, there’s plenty of hackers better than me, but they misuse their talents working for “defense” contractors, for intelligence agencies, to protect banks and corporations, and to defend the status quo. Hacker culture was born in the US as a counterculture, but that origin only remains in its aesthetics – the rest has been assimilated. At least they can wear a t-shirt, dye their hair blue, use their hacker names, and feel like rebels while they work for the Man.

You used to have to sneak into offices to leak documents [2]. You used to need a gun to rob a bank. Now you can do both from bed with a laptop in hand [3][4]. Like the CNT said after the Gamma Group hack: “Let’s take a step forward with new forms of struggle” [5]. Hacking is a powerful tool, let’s learn and fight!

[1] http://pastebin.com/raw.php?i=cRYvK4jb
[2] https://en.wikipedia.org/wiki/Citizens%27_Commission_to_Investigate_the_FBI
[3] http://www.aljazeera.com/news/2015/09/algerian-hacker-hero-hoodlum-150921083914167.html
[4] https://securelist.com/files/2015/02/Carbanak_APT_eng.pdf
[5] http://madrid.cnt.es/noticia/consideraciones-sobre-el-ataque-informatico-a-gamma-group

I thought the shout-out to hard-working Russian hackers was a nice touch!

If you are serious about your enterprise security, task one of your better infosec people to use HackBack! A DIY Guide as a starting point against your own network.

Think of it as a realistic test of your network security.

For “everybody a programmer” advocates, consider setting up networks booting off read-only media and specific combinations of vulnerable software to encourage practice hacking of those systems.
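One hedged sketch of such a setup (image name and forwarded port are placeholders of mine): QEMU’s -snapshot flag sends all guest writes to a throwaway overlay, approximating booting off read-only media, so the deliberately vulnerable image resets on every run.

```shell
# Hypothetical practice-lab boot. "vulnerable-lab.qcow2" and the
# forwarded port are placeholders; -snapshot discards guest writes.
qemu-system-x86_64 -m 2048 -snapshot \
  -drive file=vulnerable-lab.qcow2,format=qcow2 \
  -net nic -net user,hostfwd=tcp::8080-:8080
```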

Think of hacking “practice” systems as validation of hacking skills. Not to mention being great training for future infosec types.

PS: Check computer surplus if you want to duplicate some current government IT systems.

I first saw this in FinFisher’s Account of How He Broke Into Hacking Team Servers by Catalin Cimpanu.
