Archive for the ‘Network Security’ Category

NGREP – Julia Evans

Sunday, July 31st, 2016

Julia Evans demonstrates how to get around the limits of Twitter and introduces you to a “starter network spy tool.”

[Image: Julia Evans' hand-drawn ngrep page]

A demonstration of her writing skills as well!

Ngrep at sourceforge.

Installing on Ubuntu 14.04:

sudo apt-get update
sudo apt-get install ngrep

I’m a follower of Julia’s but even so, I checked the man page for ngrep before running the example.

The command:

sudo ngrep -d any metafilter

is interpreted as:

sudo – runs ngrep as superuser (hence my caution)

ngrep – network grep

-d any – ngrep listens on “any” interface *

metafilter – match expression; packets that match are dumped

* The “any” value following -d was the hardest to track down. The man page for ngrep describes the -d switch this way:

-d dev

By default ngrep will select a default interface to listen on. Use this option to force ngrep to listen on interface dev.

Well, that’s less than helpful. 😉

Until you discover on the tcpdump man page:

--interface=interface
Listen on interface. If unspecified, tcpdump searches the system interface list for the lowest numbered, configured up interface (excluding loopback), which may turn out to be, for example, “eth0”.
On Linux systems with 2.2 or later kernels, an interface argument of “any” can be used to capture packets from all interfaces. Note that captures on the “any” device will not be done in promiscuous mode. (bold highlight added)

If you are running a Linux system with a 2.2 or later kernel, you can pass “any” as the interface argument to ngrep’s -d switch.

Once I understood the entire command, I felt safe running it as root. 😉 Not that I expected a bad outcome, but I learned something in the process of researching the command.

Be aware that ngrep offers a plethora of switches, options, BPF filters (Berkeley Packet Filters) and the like. The man page runs to eight pages of, well, man page material.
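To see conceptually what the match expression does, here is a toy Python sketch (not ngrep itself, and nothing here touches the network) that scans captured payloads for a pattern the way ngrep matches “metafilter”:

```python
import re

def grep_packets(payloads, pattern):
    """Return payloads whose bytes match the pattern,
    mimicking ngrep's match-expression behaviour."""
    regex = re.compile(pattern.encode())
    return [p for p in payloads if regex.search(p)]

# Fake captured payloads (ngrep would read these off a live interface).
payloads = [
    b"GET /posts HTTP/1.1\r\nHost: www.metafilter.com\r\n",
    b"GET / HTTP/1.1\r\nHost: example.org\r\n",
]

matches = grep_packets(payloads, "metafilter")
# Only the first payload contains "metafilter" and is dumped.
```

The real tool applies this idea to live traffic, with BPF filters narrowing which packets are examined before the match expression runs.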

Enjoy!

Named Data Networking – Privacy Or Property Protection?

Friday, September 5th, 2014

The Named Data Networking Consortium launched on September 3, 2014! (Important changes to be baked into Internet infrastructure. Read carefully.)

Huh? 😉

In case you haven’t heard:

Named Data Networking (NDN) is a Future Internet Architecture inspired by years of empirical research into network usage and a growing awareness of persistently unsolved problems of the current Internet (IP) architecture. Its central premise is that the Internet is primarily used as an information distribution network, a use that is not a good match for IP, and that the future Internet’s “thin waist” should be based on named data rather than numerically addressed hosts.

This project continues research on NDN started in 2010 under NSF’s FIA program. It applies the project team’s increasingly sophisticated understanding of NDN’s opportunities and challenges to two national priorities–Health IT and Cyberphysical Systems–to further the evolution of the architecture in the experimental, application-driven manner that proved successful in the first three years. In particular, our research agenda is organized to translate important results in architecture and security into library code that guides development for these environments and other key applications toward native NDN designs. It simultaneously continues fundamental research into the challenges of global scalability and broad opportunities for architectural innovation opened up by “simply” routing and forwarding data based on names.

Our research agenda includes:

(1) Application design, exploring naming and application design patterns, support for rendezvous, discovery and bootstrapping, the role and design of in-network storage, and use of new data synchronization primitives;

(2) Security and trustworthiness, providing basic building blocks of key management, trust management, and encryption-based access control for the new network, as well as anticipating and mitigating future security challenges faced in broad deployment;

(3) Routing and forwarding strategy, developing and evaluating path-vector, link-state, and hyperbolic options for inter-domain routing, creating overall approaches to routing security and trust, as well as designing flexible forwarding and mobility support;

(4) Scalable forwarding, aiming to support real-world deployment, evaluation and adoption via an operational, scalable forwarding platform;

(5) Library and tool development, developing reference implementations for client APIs, trust and security, and new network primitives based on the team’s fundamental results, as well as supporting internal prototype development and external community efforts;

(6) Social and economic impacts, considering the specific questions faced in our network environments as well as broader questions that arise in considering a “World on NDN.”

We choose Mobile Health and Enterprise Building Automation and Management Systems as specific instances of Health IT and Cyberphysical Systems to validate the architecture as well as drive new research. Domain experts for the former will be the Open mHealth team, a non-profit patient-centric ecosystem for mHealth, led by Deborah Estrin (Cornell) and Ida Sim (UCSF). For the latter, our experts will be UCLA Facilities Management, operators of the second largest Siemens building monitoring system on the West Coast. To guide our research on the security dimensions of these important environments and the NDN architecture more generally, we have convened a Security Advisory Council (NDN-SAC) to complement our own security and trust effort.

Intellectual Merit

The NDN architecture builds on lessons learned from the success of the IP architecture, preserving principles of the thin waist, hierarchical names, and the end-to-end principle. The design reflects a recognition of the major shift in the applications communication model: from the “where” (i.e., the host/location) to the “what” (i.e., the content). Architecting a communications infrastructure around this shift can radically simplify application designs to allow applications to communicate directly using the name of the content they desire and leave to the network to figure out how and where to retrieve it. NDN also recognizes that the biggest weakness in the current Internet architecture is lack of security, and incorporates a fundamental building block to improve security by requiring that all content be cryptographically signed.

Truly an impressive effort and one that will be exciting to watch!

You may want to start with: Named Data Networking: Motivation & Details as an introduction.

One of the features of NDN is that named data can be cached and delivered by a router separate from its point of origin. Any user can request the named data and the caching router only knows that it has been requested. Or in the words of the Motivation document:

Caching named data may raise privacy concerns. Today’s IP networks offer weak privacy protection. One can find out what is in an IP packet by inspecting the header or payload, and who requested the data by checking the destination address. NDN explicitly names the data, arguably making it easier for a network monitor to see what data is being requested. One may also be able to learn what data is requested through clever probing schemes to derive what is in the cache. However NDN removes entirely the information regarding who is requesting the data. Unless directly connected to the requesting host by a point-to-point link, a router will only know that someone has requested certain data, but will not know who originated the request. Thus the NDN architecture naturally offers privacy protection at a fundamentally different level than the current IP networks.
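The caching behaviour behind that claim can be sketched as a toy model (plain Python, not NDN code): the router stores content by name and answers later requests from its cache, recording nothing about who asked.

```python
class CachingRouter:
    """Toy NDN-style router: content is stored and served by name;
    the cache records nothing about who requested it."""
    def __init__(self):
        self.cache = {}

    def fetch(self, name, origin_fetch):
        if name in self.cache:            # cache hit: origin is never contacted
            return self.cache[name], "cache"
        content = origin_fetch(name)      # cache miss: pull from the origin
        self.cache[name] = content
        return content, "origin"

router = CachingRouter()
origin = lambda name: b"payload for " + name.encode()

first, where1 = router.fetch("/ndn/example/video", origin)
second, where2 = router.fetch("/ndn/example/video", origin)  # served from cache
```

The second requester gets the same bytes, but neither the origin nor the cache learned who that requester was.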

Which sounds attractive, until you notice that the earlier quote ends saying:

and incorporates a fundamental building block to improve security by requiring that all content be cryptographically signed (emphasis added)

If I am interpreting the current NDN statements correctly, routers will not accept or transport data packets that are not cryptographically signed.
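A conceptual sketch of that rule, using HMAC from the Python standard library as a stand-in for NDN’s actual public-key signatures (the names and checks here are illustrative assumptions, not the NDN wire format):

```python
import hmac
import hashlib

KEY = b"producer-secret"  # stand-in for the producer's signing key

def sign_data(name, content, key=KEY):
    """Producer: attach a signature over the name plus content."""
    sig = hmac.new(key, name + content, hashlib.sha256).digest()
    return {"name": name, "content": content, "signature": sig}

def router_accepts(packet, key=KEY):
    """Router: forward only packets whose signature verifies."""
    expected = hmac.new(key, packet["name"] + packet["content"],
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, packet["signature"])

pkt = sign_data(b"/ndn/example/article", b"named data payload")
tampered = dict(pkt, content=b"altered payload")  # fails verification
```

The point of the sketch: any content that cannot produce a valid signature never moves, which is exactly what ties published data back to an identifiable signer.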

Cryptographic signing of data packets, depending upon its requirements, will eliminate anonymous hosting of data. Think about that for a moment. What data might not be made public if its transmission makes its originator identifiable?

Lots of spam no doubt but also documents such as the recent Snowden leaks and other information flow embarrassing to governments.

Or to those who do not sow but seek to harvest, such as the RIAA.

NDN is in the early stages, which is the best time to raise privacy, fair use and similar issues in its design.

Big Data Security Part Two: Introduction to PacketPig

Tuesday, November 20th, 2012

Big Data Security Part Two: Introduction to PacketPig by Michael Baker.

From the post:

Packetpig is the tool behind Packetloop. In Part One of the Introduction to Packetpig I discussed the background and motivation behind the Packetpig project and problems Big Data Security Analytics can solve. In this post I want to focus on the code and teach you how to use our building blocks to start writing your own jobs.

The ‘building blocks’ are the Packetpig custom loaders that allow you to access specific information in packet captures. There are a number of them, but the two I will focus on in this post are:

  • Packetloader() allows you to access protocol information (Layer-3 and Layer-4) from packet captures.
  • SnortLoader() inspects traffic using Snort Intrusion Detection software.
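To get a feel for the kind of Layer-3 information a loader like Packetloader() pulls out of a capture, here is a minimal pure-Python sketch (not the Packetpig code) that unpacks a few fields from a raw IPv4 header with struct:

```python
import struct
import socket

def parse_ipv4_header(raw):
    """Extract a few Layer-3 fields from the first 20 bytes
    of an IPv4 header (header options not handled)."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,
        "ttl": ttl,
        "protocol": proto,  # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# A hand-built header: version 4, TTL 64, protocol TCP, 10.0.0.1 -> 10.0.0.2
header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     socket.inet_aton("10.0.0.1"), socket.inet_aton("10.0.0.2"))
fields = parse_ipv4_header(header)
```

Packetpig does this at scale across whole capture files and hands the fields to Pig for querying; the sketch just shows what “protocol information from packet captures” means at the byte level.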

Just in case you get bored with holiday guests, you can spend some quality time looking around on the other side of your cable router. 😉

Or deciding how you would model such traffic using a topic map.

Both would be a lot of fun.

Data mining for network security and intrusion detection

Wednesday, July 18th, 2012

Data mining for network security and intrusion detection by Dzidorius Martinaitis.

One of my favourite stories about network security/intrusion came from a Netware class. The instructor related that in a security “audit” of a not-so-small firm, it was discovered the Novell servers were sitting in a room that everyone, including the cleaning crew, had access to.

Guess they never heard of physical security or Linux boot disks.

Assuming you have taken care of the obvious security risks, topic maps might be useful in managing the results of data mining.

From the post:

In preparation for “Haxogreen” hackers summer camp which takes place in Luxembourg, I was exploring network security world. My motivation was to find out how data mining is applicable to network security and intrusion detection.

The Flame virus, Stuxnet, and Duqu proved that static, signature-based security systems are not able to detect very advanced, government-sponsored threats. Nevertheless, signature-based defense systems are mainstream today – think of antivirus and intrusion detection systems. What do you do when the unknown is unknown? Data mining comes to mind as the answer.

Data mining is or can be employed in the following areas: misuse/signature detection, anomaly detection, scan detection, etc.

Misuse/signature detection systems are based on supervised learning. During the learning phase, labeled examples of network packets or system calls are provided, from which the algorithm can learn about the threats. This is a very efficient and fast way to find known threats. Nevertheless, there are some important drawbacks, namely false positives, novel attacks, and the complication of obtaining initial data for training the system.

False positives happen when normal network flows or system calls are marked as a threat. For example, a user can fail to provide the correct password three times in a row, or start using a service in a way that deviates from the standard profile. A novel attack can be defined as an attack not seen by the system, meaning the signature or pattern of such an attack has not been learned and the system will be penetrated without the knowledge of the administrator. The latter obstacle (the training dataset) can be overcome by collecting data over time or relying on public data, such as the DARPA Intrusion Detection Data Set.

Although misuse detection can be built with your own data mining techniques, I would suggest a well-known product like Snort, which relies on crowd-sourcing.
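The signature-detection idea in the quoted post, including how a false positive arises, can be reduced to a deliberately naive Python sketch (the signatures and events are made up for illustration; real systems like Snort use far richer rules):

```python
# Labeled training examples: (event, label) pairs the system "learns" from.
TRAINING = [
    ("SELECT * FROM users WHERE id=1 OR 1=1", "threat"),  # SQL injection
    ("failed login", "threat"),                           # brute-force hint
    ("GET /index.html", "benign"),
]

# "Learning" here is just collecting the strings seen in threat examples.
SIGNATURES = [event for event, label in TRAINING if label == "threat"]

def classify(event):
    """Flag an event as a threat if any learned signature appears in it."""
    return "threat" if any(sig in event for sig in SIGNATURES) else "benign"

alerts = [classify(e) for e in [
    "GET /index.html",
    "SELECT * FROM users WHERE id=1 OR 1=1",
    "failed login: user typo'd password",  # false positive: a forgetful user
]]
```

The third event is flagged even though it is just a forgetful user, which is exactly the false-positive drawback the post describes, and a pattern not in SIGNATURES would sail through untouched, which is the novel-attack problem.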

Taking Snort as an example, what other system data would you want to merge with data from Snort?

Or for that matter, how would you share such information (Snort+) with others?

PS: Be aware that cyber-attack/security/warfare are hot topics and therefore marketing opportunities.