Intruder Detection with tcpdump

tcpdump tool

tcpdump is a very powerful tool for capturing, parsing, and analyzing traffic. To begin a basic capture, use the following syntax:

tcpdump -n -i <interface> -s <snaplen>

-n      tells tcpdump not to resolve IP addresses to domain names or port numbers to service names.
-i      <interface> tells tcpdump which interface to use.
-s      <snaplen> tells tcpdump how much of each packet to record. I used 1515, but 1514 is sufficient for most cases. If you don't specify a size, tcpdump will only capture the first 68 bytes of each packet. A snaplen of 0 tells tcpdump to capture whole packets, but it is not supported by older versions of tcpdump.
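For example, to capture whole packets without name resolution (a sketch; the interface name eth0 is an assumption, so substitute one of the interfaces listed by tcpdump -D):

# tcpdump -n -i eth0 -s 0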

Below is an example output of a dump; although it only contains a few lines, it holds a great deal of information.

12:24:51.517451  IP  10.10.253.34.2400 > 192.5.5.241.53:  54517 A? www.bluecoat.com.  (34)

12:24:51.517451          the timestamp
10.10.253.34.2400        source address and port
>                        traffic direction
192.5.5.241.53           destination address and port
54517                    the DNS query ID, shared by the client 10.10.253.34 and the DNS server 192.5.5.241 so the response can be matched to the query
A?                       10.10.253.34 asks a question regarding the A record for www.bluecoat.com
(34)                     the DNS message is 34 bytes long

More tcpdump capture options

Here are some examples of options to use when capturing data and why to use them (a combined example follows the list):

-i      specify an interface; this will ensure that you are sniffing where you expect to sniff
-n      tells tcpdump not to resolve IP addresses to domain names and port numbers to service names
-nn     don't resolve hostnames or port names
-X      show the packet's contents in both hex and ASCII
-XX     same as -X, but also include the Ethernet header
-v      increase verbosity; -vv and -vvv return progressively more information
-c      capture only a given number of packets and then stop
-s      tell tcpdump how much of each packet to record
-S      print absolute sequence numbers
-e      print the Ethernet (link-level) header
-q      show less protocol information
-E      decrypt IPsec (ESP) traffic by providing an encryption key
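As an illustration of combining these options (a sketch; the interface eth1 matches the examples later in this article and may differ on your system):

# tcpdump -nn -i eth1 -s 0 -c 100 -XX -v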

Packet, Segment, and Datagram
TCP accepts data from a data stream, segments it into chunks, and adds a TCP header, creating a TCP segment. UDP sends messages, referred to as datagrams, to other hosts on an Internet Protocol (IP) network without requiring prior communication to set up special transmission channels or data paths. IP then creates its own datagram out of what it receives from TCP or UDP. If the TCP segment or UDP datagram plus the IP headers is small enough to send in a single package on the wire, IP creates a packet. If it is too large and exceeds the maximum transmission unit (MTU) of the media, IP fragments the datagram into smaller packets that fit within the MTU. The fragments are then reassembled by the destination.
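If you want to see whether fragmentation is actually occurring on the wire, a commonly used filter matches IPv4 packets whose more-fragments flag is set or whose fragment offset is non-zero (a sketch; add an interface and snaplen as needed):

# tcpdump -n 'ip[6:2] & 0x3fff != 0'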

Tcpdump read and write to/from a file
Tcpdump allows you to write data to a file using the -w option and to read from a file with the -r option.

$ sudo tcpdump -i wlan0 -w dumpfile001

$ sudo tcpdump -r dumpfile.pcap

Some people like to see the packets on the screen as they are captured and also have them saved to a file. Use the following options: tcpdump -n -i eth1 -s 1515 -l | tee output.txt
The -l option tells tcpdump to make its output line-buffered, while piping the output to the tee utility sends it to the screen and to output.txt simultaneously. This command displays packets on the screen while writing data to output.txt, but output.txt will not be in binary libpcap format. If you also need a binary capture, the best way is to run a second instance of tcpdump, as shown below.
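For example (a sketch; the interface and file names are placeholders), one instance prints to the screen while a second writes a binary capture file:

# tcpdump -n -i eth1 -s 1515 -l | tee output.txt
# tcpdump -n -i eth1 -s 1515 -w output.pcap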

Timestamps
When tcpdump captures packets in libpcap format, it adds a timestamp to each packet record in the capture file. We can augment that data with the -tttt flag, which adds a date to the timestamp (see Figure #1).

Figure 1

If you need to be absolutely sure of the time, use the -tt flag, which reports the number of seconds and microseconds since the UNIX epoch of 00:00:00 UTC on January 1, 1970 (see Figure #2).

Figure 2
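For example, reading the earlier capture back with each timestamp style (a sketch; dumpfile.pcap is the file read above):

# tcpdump -n -tttt -r dumpfile.pcap
# tcpdump -n -tt -r dumpfile.pcap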

Useful syntax
Being able to cut the traffic down to just what you are looking for is essential. Here are some expressions that can be helpful in tcpdump.

Net – This will capture the traffic on a block of IPs, e.g., 192.168.0.0/24
# tcpdump net 192.168.1.0/24
Src, dst – This will only capture packets from a given source or destination.
# tcpdump src 192.168.100.234
# tcpdump dst 10.10.24.56

Host – Capture only traffic based on the IP address
# tcpdump host 10.10.253.34
Proto – Capture only a given protocol; this works for tcp, udp, and icmp
# tcpdump tcp
Port – Capture packets coming from or going to a port.
# tcpdump port 21
Port ranges – Capture packets on a range of ports.
# tcpdump portrange 20-25
Using expressions such as AND [&&], OR [||], and NOT [!]
# tcpdump -n -i eth1 host 10.10.253.34 and host 10.10.33.10
# tcpdump -n -i eth1 src net 10.10.253.0/24 and dst net 10.10.33.0/24 or 192.5.5.241
# tcpdump -n -i eth1 src net 10.10.30.0/24 and not icmp

Searching for info on packets with tcpdump
If you want to search for information in the packet, you have to know where to look. Tcpdump starts counting the bytes of header information at byte 0, and byte 13 of the TCP header contains the TCP flags, shown in Table #1.

<----byte 12----------><--------byte 13--------><--------byte 14--------><--------byte 15-------->

Table #1: bytes 12 through 15 of the TCP header; the flag bits are held in byte 13

Now look at byte 13: if the SYN and ACK flags are both set, the binary value is 00010010, which is the same as decimal 18. We can search for packets carrying this value in byte 13, as shown here.


# tcpdump -n -r dumpfile.lpc -c 10 'tcp[13] == 18' and host 172.16.183.2

Here is a sample of what this command returns, shown in Figure #3.

Figure #3

When capturing data with tcpdump, one way to ignore the ARP traffic is to use a filter like the following.


# tcpdump -n -s 1515 -c 5 -i eth1 tcp or udp or icmp

This will catch only tcp, udp, or icmp.
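An equivalent approach (a sketch; eth1 as above) is to exclude ARP explicitly with a negated primitive:

# tcpdump -n -s 1515 -c 5 -i eth1 not arp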

If you want to find all the TCP packets with the SYN/ACK flags set, or with other flag combinations, take a look at Table #2 and the tcpdump filter syntax shown below.


Flag      Binary      Decimal
URG       00100000    32
ACK       00010000    16
PSH       00001000    8
RST       00000100    4
SYN       00000010    2
FIN       00000001    1
SYNACK    00010010    18

Table #2

Tcpdump filter syntax

Show all URGENT (URG) packets
# tcpdump 'tcp[13] == 32'
Show all ACKNOWLEDGE (ACK) packets
# tcpdump 'tcp[13] == 16'
Show all PUSH (PSH) packets
# tcpdump 'tcp[13] == 8'
Show all RESET (RST) packets
# tcpdump 'tcp[13] == 4'
Show all SYNCHRONIZE (SYN) packets
# tcpdump 'tcp[13] == 2'
Show all FINISH (FIN) packets
# tcpdump 'tcp[13] == 1'
Show all SYNCHRONIZE/ACKNOWLEDGE (SYNACK) packets
# tcpdump 'tcp[13] == 18'
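Note that the expressions above match packets in which only the named flag bits are set. To match a flag no matter what other bits are set alongside it, a bitwise test is commonly used instead (a sketch), for example to catch every packet with SYN set:

# tcpdump 'tcp[13] & 2 != 0'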

Using tcpdump in Incident Response

When doing analysis on network traffic, a tool like tcpdump is critical. Below are some examples of using tcpdump to view a couple of different dump files to learn more about network problems or possible attack scenarios. The first is a binary dump file of a Snort log, and we are given the following information: the IP address of the Linux system is 192.168.100.45, and an attacker got in using a WU-FTPD vulnerability and deployed a backdoor. What can we find out about how the attack happened and what the attacker did?

First we will take a look at the file:

# tcpdump -xX -r snort001.log
The log appears long; at this point you may want to run the file through Snort:
# snort -r snort001.log -A full -c /etc/snort/snort.conf
This will give you some information such as total packets processed, protocol breakdown, any alerts, etc. See Figures #4 & #5.

Figure #4                                                                               Figure #5

Next, extract the full Snort log file for analysis:

# tcpdump -nxX -s 1515 -r snort001.log > tcpdump-full.dat

This will give us a readable file to parse through. After looking through it, we find ip-proto-11, which is Network Voice Protocol (NVP) traffic.
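A quick way to confirm how much of that traffic is in the readable dump is a standard shell one-liner (a sketch; ip-proto-11 is the label tcpdump gives this protocol in the dump above):

# grep -c ip-proto-11 tcpdump-full.dat

Next, we search the binary log for that protocol and extract it: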


# tcpdump -r snort001.log -w NVP-traffic.log proto 11
This command reads the snort001.log file, filters for IP protocol 11, and writes the matching packets to the file NVP-traffic.log. Next we need another pass to view the result, because it is a binary file.

# tcpdump -nxX -s 1515 -r NVP-traffic.log > nvp-traffic_log.dat
This produces a file of both hex and ASCII, which is nice, but we just want the IP addresses. Try this:

# tcpdump -r NVP-traffic.log > nvp-traffic_log01.dat
This gives us a list of IP addresses that were communicating using the Network Voice Protocol (NVP) (see Figure #6).

Figure #6

Next we look at another Snort dump file, this one from a compromised Windows box that was communicating with an IRC server. Which IRC servers did the host at 172.16.134.191 communicate with?

We look for TCP connections originating from the server toward the outside; we can use tcpdump with a filtering expression to capture the SYN/ACK packets coming back in from the outside servers.

# tcpdump -n -nn -r snort_log 'tcp and dst host 172.16.134.191 and tcp[13]==18'

This produces a long list of connections going from 172.16.134.191 to outside hosts (see Figure #7).

Figure #7

Now, we know that IRC typically communicates on ports 6666 through 6669, so let's add that and narrow down the search with the following command.
# tcpdump -n -nn -r snort_log 'tcp and dst host 172.16.134.191 and tcp[13]==18' and portrange 6666-6669 (see output in Figure #8 below)

Figure #8

Now we have narrowed the list down to three IPs that were communicating with the server over IRC.


Tcpdump is a wonderful, general-purpose packet sniffer and incident response tool that should be in your tool shed.

More Stories By David Dodd

David J. Dodd is currently in the United States, holds a current 'Top Secret' DoD clearance, and is available for consulting on various Information Assurance projects. He is a former U.S. Marine with an avionics background in Electronic Countermeasures Systems. David has given talks at the San Diego Regional Security Conference and SDISSA, is a member of InfraGard, and contributes to Secure our eCity http://securingourecity.org. He works for Xerox as Information Security Officer, City of San Diego, and for pbnetworks Inc. http://pbnetworks.net, a Service Disabled Veteran Owned Small Business (SDVOSB) located in San Diego, CA, and can be contacted by emailing: dave at pbnetworks.net.
