Archive for the ‘Dark Web’ Category

Weaponry on the Dark Web – Read The Fine Print

Wednesday, July 26th, 2017

The NextGov headline screaming: 3D-Printed Gun Designs Are Selling For $12 On The Dark Web is followed by a picture whose fine-print caption reads:

The additive-manufactured RAMBO system includes an NSRDEC-designed standalone kit with printed adjustable buttstock, mounts, grips and other modifications—modifications made possible by the quick turnaround time afforded by 3D printing. // US Army

So….

  1. This is NOT a printable gun from the Dark Web
  2. The printable parts ARE the buttstock, mounts and grips, not the gun itself

Just so you know, the RAND paper doesn’t include this image. 😉

In fact, Behind the curtain: The illicit trade of firearms, explosives and ammunition on the dark web by Giacomo Persi Paoli, Judith Aldridge, Nathan Ryan and Richard Warnes concedes that trading of weapons on the Dark Web is quite small compared to non-Dark Web trafficking.

Missing in the discussion of 3-D weapons plans is a comparison of the danger they pose relative to other technologies.

The Cal Fire map leaves no doubt that $12 or less in gasoline and matches can produce far more damage than any 3-D printed weapon. Without the need for a 3-D printer.

Yes?

All weapons pose some danger. Decision makers need to know the relative dangers of weapons vis-a-vis each other.

A RAND report on the comparative danger of weapons would be far more useful than reports on weapons and their sources in isolation.

Dark Web Data Dumps

Sunday, December 11th, 2016

Dark Web Data Dumps by Sarah Jamie Lewis.

From the webpage:

A collection of structured data obtained from scraping the dark web.

Why

Researchers need more data about the dark web.

The best resource we have right now is the [Black Market Archives](https://www.gwern.net/Black-market%20archives), scrapes of various marketplaces collected by Gwern et al. in 2013.

Much has changed since 2013, and complete web dumps, while useful for some research tasks, frustrate and complicate others.

Further, governments & corporations are already building out such data in private & for profit.

Citing This Resource

Sarah Jamie Lewis. Dark Web Data Dumps, 2016, 10 Dec 2016. Web. [access date] https://polecat.mascherari.press/onionscan/dark-web-data-dumps

Valhalla Marketplace Listings October 2016

Sarah Jamie Lewis. Dark Web Data Dumps, 2016, October 2016. Web. [access date] https://polecat.mascherari.press/onionscan/dark-web-data-dumps/blob/master/valhalla-marketplace-listings-2016-10.csv

Supporting

If you would like to support this, and other dark web research, please become a patron.

Valhalla Marketplace Listings runs 1.3 MB and 16,511 lines.

Sans the Rolex watch ads, makes great New Year’s party material. 😉

Running a Tor Exit Node for fun and e-mails

Sunday, September 4th, 2016

Running a Tor Exit Node for fun and e-mails by Antonios A. Chariton.

From the post:


To understand the logistics behind running a Tor Exit Node, I will tell you how I got to run my Tor Exit Node for over 8 months. Hopefully, during the process, some of your questions will be answered, and you’ll also learn some new things. Please note that this is my personal experience and I cannot guarantee it will be the same for you. Also, I must state that I have run other exit nodes in the past, as well as multiple non-exit relays and bridges.
…

A great first person account on running a Tor Exit Node.

Some stats after 8 months:

  • It has been running for almost 8 months
  • It costs 4,90 EUR / month. In comparison, the same server in AWS would cost $1,122, or 992€ as of today
  • The total cost to date is 40€. In comparison, the same server in AWS would cost about 8,000€.
  • It is pushing up to 50 Mb/s, every second
  • It relayed over 70 TB of Tor traffic
  • It generated 2,729 Abuse E-Mails
  • It is only blocking port 25, and this to prevent spam
  • It helped hundreds or thousands of people to reach an uncensored Internet
  • It helped even more people browse the Internet anonymously and with privacy
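The "only blocking port 25" setup in the stats above corresponds to a Tor exit policy along these lines. This is a hedged sketch of a torrc fragment, not the author's actual configuration; nickname and contact values are placeholders:

```
# torrc sketch: exit relay that rejects only SMTP (port 25) to curb spam.
# Nickname and ContactInfo are placeholders -- use your own values.
Nickname MyExitNode
ContactInfo abuse@example.com
ORPort 9001
ExitRelay 1
ExitPolicy reject *:25    # block outbound SMTP (spam prevention)
ExitPolicy accept *:*     # allow everything else
```

Consult the Tor manual for the exact directives your Tor version supports before deploying.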

If you're not quite up to running an exit node, consider running a Tor relay node: Add Tor Nodes For 2 White Chocolate Mochas (Venti) Per Month.

Considering the bandwidth used by governments for immoral purposes, the observation:


Finally, just like with everything else, we have malicious users. Not necessarily highly skilled criminals, but people in general who (ab)use the anonymity that Tor provides to commit things they otherwise wouldn’t.

doesn’t trouble me.

As a general rule, highly skilled or not, criminals don’t carry out air strikes against hospitals and such.

Dark Web OSINT With Python Part Three: Visualization

Thursday, September 1st, 2016

Dark Web OSINT With Python Part Three: Visualization by Justin.

From the post:

Welcome back! In this series of blog posts we are wrapping the awesome OnionScan tool and then analyzing the data that falls out of it. If you haven’t read parts one and two in this series then you should go do that first. In this post we are going to analyze our data in a new light by visualizing how hidden services are linked together as well as how hidden services are linked to clearnet sites.

One of the awesome things that OnionScan does is look for links between hidden services and clearnet sites and makes these links available to us in the JSON output. Additionally it looks for IP address leaks or references to IP addresses that could be used for deanonymization.

We are going to extract these connections and create visualizations that will assist us in looking at interesting connections, popular hidden services with a high number of links and along the way learn some Python and how to use Gephi, a visualization tool. Let’s get started!
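The extraction step described above can be sketched in a few lines of Python: pull the hidden-service-to-clearnet links out of the JSON and write an edge list CSV that Gephi imports directly. The field names `hiddenService` and `linkedSites` are my assumption about OnionScan's report layout; verify against your own scan output:

```python
import json

def extract_edges(results):
    """Build (hidden_service, clearnet_site) edges from OnionScan results.

    `results` is a list of dicts parsed from OnionScan JSON reports.
    The field names "hiddenService" and "linkedSites" are assumed --
    check them against your actual report files.
    """
    edges = []
    for report in results:
        onion = report.get("hiddenService")
        for site in report.get("linkedSites") or []:
            edges.append((onion, site))
    return edges

def write_gephi_csv(edges, path):
    """Write an edge list CSV in the Source,Target form Gephi expects."""
    with open(path, "w") as f:
        f.write("Source,Target\n")
        for src, dst in edges:
            f.write(f"{src},{dst}\n")

# Example with fabricated scan data:
sample = [{"hiddenService": "example1234567890.onion",
           "linkedSites": ["clearnet-blog.example.com"]}]
print(extract_edges(sample))
# -> [('example1234567890.onion', 'clearnet-blog.example.com')]
```

From there, Gephi's CSV import and a layout like ForceAtlas 2 give you the link graph.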

Justin tops off this great series on OnionScan by teaching the rudiments of using Gephi to visualize and explore the resulting data.

Can you map yourself from the Dark Web to a visible site?

If so, you aren’t hidden well enough.

Dark Web OSINT with Python Part Two: … [Prizes For Unmasking Government Sites?]

Wednesday, August 10th, 2016

Dark Web OSINT with Python Part Two: SSH Keys and Shodan by Justin.

From the post:

Welcome back good Python soldiers. In Part One of this series we created a wrapper around OnionScan, a fantastic tool created by Sarah Jamie Lewis (@sarajamielewis). If you haven’t read Part One then go do so now. Now that you have a bunch of data (or you downloaded it from here) we want to do some analysis and further intelligence gathering with it. Here are a few objectives we are going to cover in the rest of the series.

  1. Attempt to discover clearnet servers that share SSH fingerprints with hidden services, using Shodan. As part of this we will also analyze whether the same SSH key is shared amongst hidden services.
  2. Map out connections between hidden services, clearnet sites and any IP address leaks.
  3. Discover clusters of sites that are similar based on their index pages, this can help find knockoffs or clones of “legitimate” sites. We’ll use a machine learning library called scikit-learn to achieve this.

The scripts that were created for this series are quick little one-offs, so there is some shared code between each script. Feel free to tighten this up into a function or a module you can import. The goal is to give you little chunks of code that will teach you some basics on how to begin analyzing some of the data and more importantly to give you some ideas on how you can use it for your own purposes.

In this post we are going to look at how to connect hidden services by their SSH public key fingerprints, as well as how to expand our intelligence gathering using Shodan. Let’s get started!
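Objective 1 above, spotting shared SSH keys, reduces to grouping services by fingerprint and keeping any fingerprint seen more than once. Here is a minimal sketch; the `sshKey` field name is my assumption about OnionScan's JSON, and the Shodan call in the comment is illustrative, not a tested query:

```python
from collections import defaultdict

def group_by_ssh_fingerprint(results):
    """Map SSH key fingerprints to the hidden services exposing them.

    A fingerprint shared by two or more services (or by a service and
    a clearnet host found via Shodan) is a strong correlation signal.
    """
    by_fp = defaultdict(list)
    for report in results:
        fp = report.get("sshKey")  # field name assumed; check your JSON
        if fp:
            by_fp[fp].append(report.get("hiddenService"))
    # keep only fingerprints that appear on more than one service
    return {fp: hosts for fp, hosts in by_fp.items() if len(hosts) > 1}

# With the shodan library you could then search each fingerprint, e.g.:
#   api = shodan.Shodan(API_KEY)
#   api.search(fingerprint)   # illustrative; consult Shodan's docs

sample = [
    {"hiddenService": "a.onion", "sshKey": "aa:bb:cc"},
    {"hiddenService": "b.onion", "sshKey": "aa:bb:cc"},
    {"hiddenService": "c.onion", "sshKey": "dd:ee:ff"},
]
print(group_by_ssh_fingerprint(sample))
# -> {'aa:bb:cc': ['a.onion', 'b.onion']}
```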

Expand your Dark Web OSINT intel skills!

Being mindful that if you can discover your Dark Web site, so can others.

Anyone awarding Black Hat conference registrations for unmasking government sites on the Dark Web?

The Privileged Cry: Boo, Hoo, Hoo Over Release of OnionScan Data

Saturday, July 30th, 2016

It hasn’t taken long for the privileged to cry “boo, hoo, hoo,” over Justin Seitz’s releasing the results of using OnionScan on over 8,000 Dark Web sites. You can find Justin’s dump here.

Joseph Cox writes in: Hacker Mass-Scans Dark Web Sites for Vulnerabilities, Dumps Results:

…Sarah Jamie Lewis, the creator of OnionScan, warns that publishing the full dataset like this may lead to some Tor hidden services being unmasked. In her own reports, Lewis has not pointed to specific sites or released the detailed results publicly, and instead only provided summaries of what she found.

“If more people begin publishing these results then I imagine there are a whole range of deanonymization vectors that come from monitoring page changes over time. Part of the reason I destroy OnionScan results once I’m done with them is because people deserve a chance to fix the issue and move on—especially when it comes to deanonymization vectors,” Lewis told Motherboard in an email, and added that she has, when legally able to, contacted some sites to help them fix issues quietly.

Sarah Jamie Lewis and others who seek to keep vulnerability data secret are making two assumptions:

  1. They should have exclusive access to data.
  2. Widespread access to data diminishes their power and privilege.

I agree only with #2 and it is the reason I support public and widespread distribution of data, all data.

Widespread access to data means it is your choices and abilities that determine its uses and not privilege of access.

BTW, Justin has the better of the exchange:


Seitz, meanwhile, thinks his script could be a useful tool to many people. “Too often we set the bar so high for the general practitioner (think journalists, detectives, data geeks) to do some of this larger scale data work that people just can’t get into it in a reasonable way. I wanted to give people a starting point,” he said.

“I am a technologist, so it’s the technology and resulting data that interest me, not the moral pros and cons of data dumping, anonymity, etc. I leave that to others, and it is a grey area that as an offensive security guy I am no stranger to,” he continued.

The question is: Do you want privileged access to data for Sarah Jamie Lewis and a few others or do you think everyone should have equal access to data?

I know my answer.

What’s yours?

Dark Web OSINT With Python and OnionScan: Part One

Saturday, July 30th, 2016

Dark Web OSINT With Python and OnionScan: Part One by Justin.

When you tire of what passes for political discussion on Twitter and/or Facebook this weekend, why not try your hand at something useful?

Like looking for data leaks on the Dark Web?

You could, in theory at least, notify the sites of their data leaks. 😉

One of the aspects of announced leaks that never ceases to amaze me is reports that read:

Well, we pwned the (some string of letters) database and then notified them of the issue.

Before getting a copy of the entire database? What’s the point?

All you have accomplished is making another breach more difficult and demonstrating your ability to breach a system where the root password was most likely “god.”

Anyway, Justin gets you started on seeking data leaks on the Dark Web saying:

You may have heard of this awesome tool called OnionScan that is used to scan hidden services in the dark web looking for potential data leaks. Recently the project released some cool visualizations and a high level description of what their scanning results looked like. What they didn’t provide is how to actually go about scanning as much of the dark web as possible, and then how to produce those very cool visualizations that they show.

At a high level we need to do the following:

  1. Setup a server somewhere to host our scanner 24/7 because it takes some time to do the scanning work.
  2. Get TOR running on the server.
  3. Get OnionScan setup.
  4. Write some Python to handle the scanning and some of the other data management to deal with the scan results.
  5. Write some more Python to make some cool graphs. (Part Two of the series)

Let’s get started!
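The wrapper in step 4 can be as small as one subprocess call per onion address. A minimal sketch follows; the `--jsonReport` flag is my recollection of OnionScan's CLI and may differ in your version, so run `onionscan --help` to confirm:

```python
import json
import subprocess

def build_scan_command(onion):
    """Command line for scanning one hidden service with OnionScan.

    The --jsonReport flag is assumed from memory; confirm against
    `onionscan --help` for the version you installed.
    """
    return ["onionscan", "--jsonReport", onion]

def scan(onion):
    """Run OnionScan over Tor and parse its JSON report.

    Requires onionscan on PATH and a running Tor daemon.
    """
    proc = subprocess.run(build_scan_command(onion),
                          capture_output=True, text=True, check=True)
    return json.loads(proc.stdout)

print(build_scan_command("example1234567890.onion"))
# -> ['onionscan', '--jsonReport', 'example1234567890.onion']
```

Loop `scan()` over a list of onion addresses, write each report to disk, and you have the raw data the later parts of the series analyze.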

Very much looking forward to Part 2!

Enjoy!