Emotet Malware: Email Spoofer Awakening

According to IBM X-Force, the Emotet malware has recently been spreading in Germany and Japan, targeting companies in both countries with increasing aggression.

Emotet is a banking Trojan spread by macro-enabled email attachments that contain links to malicious sites. It functions primarily as a downloader for other malware, namely the TrickBot Trojan and Ryuk ransomware. Due to its polymorphic nature, it can evade traditional signature-based detection methods, making it particularly difficult to combat. Once it has infiltrated a system, it infects running processes and connects to a remote C&C server to receive instructions, download further payloads and upload stolen data (us-cert.gov).

Traditionally, Emotet has disguised itself as corporate billing notifications, often mimicking the branding of reputable institutions to appear legitimate. This strategy allowed it to target victims in the USA (52% of all attacks), Japan (22%) and EU countries (japan.zdnet.com). An incident in December 2019 caused the city of Frankfurt, home of the European Central Bank, to shut down its network (zdnet.com).

In Japan, however, the malware has been acting far more aggressively than in previous years. Increased activity was reported in late 2019, and recently, following the coronavirus outbreak in China, Emotet changed tactics and has been spreading throughout Japan in the form of fake public health warnings containing disturbing reports of coronavirus cases in the Gifu, Osaka and Tottori prefectures (IBM X-Force Exchange).

This is a good illustration of what makes this type of malware so dangerous – not only is it resistant to detection by signature-based methods, it also manipulates basic human emotion to disseminate itself.

Protection against Emotet therefore requires more complex measures. Besides well-informed prevention, an effective way to cope with it is to rely on behavior analysis seeking indicators of compromise (IoC). In Flowmon’s case, this takes the form of the InformationStealers behavior pattern (BPattern), which exists as a standard detection method in Flowmon ADS and describes the symptoms of Emotet’s presence in the network.

BPatterns can be thought of as a kind of description of how different malicious actors manifest themselves in the network. They allow the system to discern threats from other activity as it monitors and critically assesses the traffic. Unlike traditional signatures, BPatterns do not look for a particular piece of code and thus retain their ability to identify threats even as they transform and progress through their life-cycle.

According to an analysis published by Fortinet, Emotet uses 5 URLs to download payload and 61 hard-coded C&C servers (fortinet.com/blog). This information is included in the BPattern, and is used by the system to recognize the infection and contain it before it can spread. For an added layer of protection, there is a BPattern for TrickBot as well (TOR_Malware). Both patterns are periodically updated depending on how the Trojans evolve and are delivered to users as part of regular updates. It was Flowmon’s partner Orizon Systems that alerted us to the increased incidence of the Emotet malware and prompted the most recent update.
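Conceptually, this kind of IoC-based detection amounts to checking flow records against a known-bad list. The sketch below is illustrative only – the addresses come from documentation ranges, not from the actual Fortinet IoC list, and the flow-record layout is a hypothetical simplification:

```python
# Toy IoC matching on flow records. The "C&C" addresses below are
# placeholders from documentation ranges, not real Emotet infrastructure.
KNOWN_CC_SERVERS = {"203.0.113.10", "198.51.100.7"}

def flag_compromised(flows):
    """Return source hosts whose flows contact a known C&C address."""
    return sorted({f["src"] for f in flows if f["dst"] in KNOWN_CC_SERVERS})

flows = [
    {"src": "10.0.0.5", "dst": "203.0.113.10"},  # hits an IoC
    {"src": "10.0.0.8", "dst": "192.0.2.44"},    # benign destination
]
print(flag_compromised(flows))  # → ['10.0.0.5']
```

A real BPattern combines many such indicators – URLs, ports, timing and volumetric symptoms – rather than a single address list.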

But no protection is infallible, and everyone is advised to keep several layers of cyber protection in place and up to date – including antivirus, IoC detection on firewalls, intrusion detection systems (IDS) and behavioral analysis on the network. Because Emotet spreads by spoofed email, users should exercise caution when opening attachments – especially those who come in daily contact with bills and documents from outside parties – and report any suspicious or unusual email to the security team.

To learn more about threat detection using Flowmon ADS, contact us for more information or try a demo.

The Effect of Packet Loss on an IDS Deployment

At SuriCon 2019, Eric Leblond and Peter Manev – both of whom are key contributors in the Suricata community – presented important test results, emphasizing the implications of packet loss. Let’s dig a little deeper into the importance of zero packet loss in an IDS deployment.

The effect of packet loss on network analysis gear varies widely based on the function the analysis device is performing. The measurement accuracy of network and/or application performance monitoring devices suffers when packets are dropped by the network sensor. In the case of a cybersecurity appliance, like a Suricata-based Intrusion Detection System (IDS), packet loss translates directly into missed intrusion alerts. The same is true of file extraction.

The effect of packet loss on intrusion alert generation

No matter how good the IDS rule, if not all packets for a given session are delivered to the IDS, alerts can be missed. This is mainly due to how an IDS processes a network session: which packets are dropped within the session determines whether the IDS has enough data to generate an alert. In many cases, the IDS will drop the entire session if key packets are missing. A missed alert could mean that an intrusion attempt went undetected.

To measure the missed alerts, the following methodology was used:

  • Traffic source is a PCAP file containing actual network traffic with malicious activity
  • PCAP file is processed to simulate specified random packet loss
  • Suricata alerts from the original PCAP file are compared with those from the modified file
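The loss-simulation step above can be sketched as follows; for simplicity the packets are plain Python objects rather than a real PCAP, and the loss model is a uniform, independent random drop:

```python
import random

def simulate_loss(packets, loss_rate, seed=42):
    """Drop each packet independently with probability loss_rate.
    A fixed seed keeps the simulated run reproducible."""
    rng = random.Random(seed)
    return [p for p in packets if rng.random() >= loss_rate]

packets = list(range(10_000))            # stand-in for packets read from a PCAP
survivors = simulate_loss(packets, 0.03)
print(f"dropped {len(packets) - len(survivors)} of {len(packets)} packets")
```

In the actual test, the surviving packets would be written back out as a new PCAP and both files replayed through Suricata so the two alert sets can be compared.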

Sample numbers:

  • 10% missed alerts with 3% packet loss
  • 50% missed alerts with 25% packet loss

The effect of packet loss on IDS file extraction

Part of deploying a successful IDS strategy is also automating file extraction. Most IDS engines support the HTTP, SMTP, FTP, NFS and SMB protocols. The file extractor runs on top of the protocol parser, which handles dechunking and unzipping the request and/or any response data if necessary. In most cases, the loss of a single packet from a network stream carrying a file will cause file extraction to fail.

Sample numbers:

  • 10% failed file extraction with 4% packet loss
  • 50% failed file extraction with 5.5% packet loss
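These figures are consistent with extraction failing whenever any packet of a multi-packet stream is lost. Assuming independent loss at rate p, the failure probability for a file carried in n packets is 1 - (1 - p)^n; the helper below is just that formula:

```python
def extraction_failure_prob(loss_rate, n_packets):
    """P(at least one of n packets lost) under independent random loss."""
    return 1.0 - (1.0 - loss_rate) ** n_packets

# A file carried in 12 packets at 5.5% loss fails extraction roughly half the time
print(round(extraction_failure_prob(0.055, 12), 2))  # → 0.49
```

The steep curve explains why even single-digit loss rates are fatal to file extraction: failure probability grows with every additional packet the file spans.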

In conclusion, the test results show how important zero packet loss is to a successful IDS deployment. FPGA SmartNIC features like on-board burst buffering, DMA, and optimized PCI Express performance will minimize or completely eliminate packet loss in a standard server-based IDS.

Ensuring performance resilience with deduplication

Performance resilience is the ability to ensure the performance of your commercial or in-house appliance in any data center environment. In other words, it means ensuring that your performance monitoring, cybersecurity or forensics appliance is resilient to common data center issues, such as badly configured networks, inability to specify the desired connection type, time sync, power, space, etc.


In this blog, we will look at deduplication and how support of deduplication in your SmartNIC ensures performance resilience when data center environments are not configured properly – router and switch SPAN ports specifically.


Assume the worst


When designing an appliance to analyze network data for monitoring performance, cybersecurity or forensics, it is natural to assume that the environments where your appliance will be deployed are configured correctly and adhere to best practices. It is also fair to assume that you can get the access and connectivity you need. Why would someone go to the trouble of paying for a commercial appliance or even fund the development of an appliance in-house, if they wouldn’t also ensure that the environment meets minimum requirements?


Unfortunately, it is not always like that, as many veterans of appliance installations will tell you. This is because the team responsible for deploying the appliance is not always the team responsible for running the data center, and appliances are not the latter's first priority. So what happens in practice is that the team deploying the appliance is told to install it in a specific location with specific connectivity, and that is that. You might prefer to use a tap, but one might not be available, so you need to use a Switched Port Analyzer (SPAN) port on a switch or router for access to network data.


While this might seem acceptable, it can lead to some unexpected and unwanted behavior that is responsible for those grey hairs on the heads of veterans! An example of this unwanted behavior is duplicate network packets.


How do duplicate packets occur?


Ideally, when performing network monitoring and analysis, you would like to use a tap to get direct access to the real data in real time. However, as we stated above, you can’t always dictate that and sometimes have to settle for connectivity to a SPAN port.


The difference between a tap and a SPAN port is that a tap is a physical device that is installed in the middle of the communication link so that all traffic passes through the tap and is copied to the appliance. Conversely, a SPAN port on a switch or router receives copies of all data passing through the switch, which can then be made available to the appliance through the SPAN port.


When configured properly, a SPAN port works just fine. Modern routers and switches have become better at ensuring that the data provided by SPAN ports is reliable. However, SPAN ports can be configured in a manner that leads to duplicate packets. In some cases, where SPAN ports are misconfigured, up to 50% of the packets provided by the SPAN port can be duplicates.


So, how does this occur? What you need to understand about SPAN ports is that when a packet enters the switch on an ingress port, a copy is created – and when it leaves the switch on an egress port, another copy is created. If a SPAN session mirrors both directions, duplicates are therefore unavoidable. But it is possible to configure the SPAN to create copies only on ingress or only on egress from the switch, thus avoiding duplicates.


Nevertheless, it is not uncommon to arrive in a data center environment where SPAN ports are misconfigured and nobody has permission to change the configuration on the switch or router. In other words, there will be duplicates and you just have to live with it!


What is the impact of duplicates?


Duplicates can cause a lot of issues. The obvious issue is that double the amount of data requires double the amount of processing power, memory, power, etc. However, the main issue is false positives: errors that are not really errors or threats that are not really threats. One common way that duplicates affect analysis is by an increase in TCP out-of-order or retransmission warnings. Debugging these issues takes a lot of time, usually time that an overworked, understaffed network operations or security team does not have. In addition, any analysis performed on the basis of this information is probably not reliable, so this only exacerbates the issue.


How to achieve resilience


With deduplication built-in via a SmartNIC in the appliance, it is possible to detect up to 99.99% of duplicate packets produced by SPAN ports. Similar functionality is available on packet brokers, but for a sizeable extra license fee. On Napatech SmartNICs, this is just one of several powerful features delivered at no extra charge.


The solution is ideal for situations where the appliance is connected directly to a SPAN port, dramatically reducing the amount of damage that duplicates can cause. But it also means that the appliance is resilient to any SPAN misconfiguration or other network architectural issues that can give rise to duplicates – without relying on other costly solutions, such as packet brokers, to provide the necessary functionality.

50% data reduction with built-in deduplication

The challenge: more than 50% copies

Duplicate packets are a major burden for today’s network monitoring and security applications. In the worst cases, more than 50% of the received traffic is sheer replication. This not only adds excessive pressure in terms of bandwidth, processing power, storage capacity and overall efficiency; it also places severe strain on operations and security teams, who end up wasting valuable time chasing false positives. Napatech’s intelligent deduplication capabilities solve this by identifying and discarding duplicate packets, enabling up to a 50% reduction in application data load.


Misconfigured SPAN ports

For passive monitoring and security applications, duplicate packets can make up more than 50% of the total traffic volume. This is partly due to TAP and aggregation solutions collecting packets from multiple points in the network – and partly due to misconfigured SPAN ports, an all-too-common issue in today’s data centers.


Solution: intelligent deduplication

With deduplication built in via a SmartNIC in the appliance, it is possible to detect up to 99.99% of duplicate packets. By analyzing and comparing incoming packets with previously received data, deduplication algorithms discard any replicas, easing the burden on the system and greatly optimizing performance.

Significant cost benefits

By adding deduplication in hardware via a Napatech SmartNIC, significant cost benefits can be achieved at various levels:

  1. At a performance level
    For the vast majority of capture deployments, deduplication will dramatically save system resources. By efficiently discarding redundant copies, deduplication can reduce the processing load, PCIe transfer, system memory and disk space requirements by as much as 50%.
  2. At an operational level
    At an operational level, the main issue with duplicate packets is that they distort the overview. But with deduplication, operations and security teams avoid wasting valuable time investigating false positives.
  3. At an application level
    Similar functionality is available on network packet brokers, but for a sizeable extra license fee. On Napatech SmartNICs, deduplication is just one of several powerful features delivered at no extra charge.


Key features

  • Deduplication in hardware up to 2x100G
  • Deduplication key calculated as a hash over configurable sections of the frame
  • Dynamic header information (e.g. TTL) can be masked out from the key calculation
  • Deduplication can be enabled/disabled per network port or network port group
  • Configurable action per port group: discard or pass duplicates
  • Duplicate counters per port group
  • Configurable deduplication window: 10 microseconds – 2 seconds
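In software, the scheme described by these features can be sketched roughly as follows; the field selection, the 2 ms window and the packet-record layout are illustrative assumptions, not Napatech's actual implementation:

```python
import hashlib

WINDOW = 0.002  # illustrative 2 ms deduplication window

def dedup_key(pkt):
    """Hash over selected header fields; dynamic fields like TTL are
    deliberately excluded (masked out) from the key."""
    stable = (pkt["src"], pkt["dst"], pkt["proto"], pkt["payload"])
    return hashlib.sha1(repr(stable).encode()).hexdigest()

def deduplicate(packets):
    """Pass each packet unless an identical one was seen within WINDOW."""
    seen = {}   # key -> timestamp of the last occurrence
    out = []
    for pkt in packets:
        k = dedup_key(pkt)
        last = seen.get(k)
        if last is not None and pkt["ts"] - last <= WINDOW:
            continue  # duplicate inside the window: discard
        seen[k] = pkt["ts"]
        out.append(pkt)
    return out

packets = [
    {"ts": 0.0000, "src": "10.0.0.1", "dst": "10.0.0.2",
     "proto": 6, "payload": b"x", "ttl": 64},
    {"ts": 0.0001, "src": "10.0.0.1", "dst": "10.0.0.2",
     "proto": 6, "payload": b"x", "ttl": 63},  # SPAN copy, TTL decremented
]
print(len(deduplicate(packets)))  # → 1
```

Because TTL is masked out of the key, the second packet is recognized as a duplicate even though its TTL was decremented in transit – which is exactly why masking dynamic header fields matters.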

Want to reduce data duplication by as much as 50%? Contact us today!


Introducing flow formats and their differences




Flow monitoring has become the prevalent method for monitoring traffic in high-speed networks. Several standards of flow format exist and it can be tricky to choose the right one for your needs. In this article we will go through the most common flow formats, providing a basic overview of their history and differences.

Flow monitoring history

The history of flow monitoring goes back to 1996, when the NetFlow protocol was patented by Cisco Systems. A flow record represents a stream of packets in the network sharing the same 5-tuple: source IP address, destination IP address, source port, destination port and protocol. Based on this, packets are aggregated into flow records that accumulate the amount of transferred data, the number of packets and other information from the network and transport layers. A typical flow monitoring setup consists of three main components:

Flow exporter – creates flow records by aggregating packet information and exports the records to one or more flow collectors (e.g. Flowmon Probe).

Flow collector – collects and stores the flow data (e.g. Flowmon Collector).

Analysis application – allows the visualization and analysis of the received flow data (e.g. Flowmon Monitoring Center, the native application of the Flowmon Collector).
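The 5-tuple aggregation performed by a flow exporter can be sketched as follows (simplified packet records, and none of the active/inactive timeouts that real exporters also apply):

```python
from collections import defaultdict

def aggregate_flows(packets):
    """Group packets by 5-tuple and accumulate byte and packet counts."""
    flows = defaultdict(lambda: {"bytes": 0, "packets": 0})
    for p in packets:
        key = (p["src_ip"], p["dst_ip"], p["src_port"], p["dst_port"], p["proto"])
        flows[key]["bytes"] += p["len"]
        flows[key]["packets"] += 1
    return dict(flows)

packets = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
     "src_port": 51234, "dst_port": 443, "proto": 6, "len": 1500},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
     "src_port": 51234, "dst_port": 443, "proto": 6, "len": 400},
]
flows = aggregate_flows(packets)
print(flows)  # one flow record: 2 packets, 1900 bytes
```

This is why flow data is so compact compared to full packet capture: thousands of packets collapse into a handful of per-flow counters.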

Cisco originally developed the protocol for its own products. Other manufacturers have followed suit and developed more or less similar proprietary flow data formats.

Cisco standards

NetFlow v5

The first widely adopted version was NetFlow v5. It is still the most common version and is supported by a wide range of routers and switches. However, it no longer meets the needs of accurate flow monitoring, as it does not support IPv6 traffic, MAC addresses, VLANs or other extension fields.

NetFlow v9

NetFlow v9 brought several improvements. The most important is support for templates, which allow a flexible flow export definition and ensure that NetFlow can be adapted to support new protocols. Other improvements include support for IPv6, Virtual Local Area Networks (VLANs) and Multiprotocol Label Switching (MPLS). NetFlow v9 is supported on most recent Cisco routers and switches.

Flexible NetFlow

Cisco continues to improve NetFlow technology. The next generation is called Flexible NetFlow, which further extends NetFlow v9. What Flexible NetFlow can export is highly customizable, allowing customers to export almost anything that passes through the router.

Other vendors

jFlow, NetStream, cflowd

All the standards mentioned above are similar to the original Cisco NetFlow standard. jFlow was developed by Juniper Networks, NetStream by Huawei and cflowd by Alcatel-Lucent.

Independent standard

IPFIX

The proposal for the IPFIX (Internet Protocol Flow Information eXport) protocol was published by the IETF in 2008. IPFIX is derived from NetFlow v9 and is intended to serve as a universal protocol for exporting flow information from network devices to a collector or network management system. IPFIX is more flexible than NetFlow and allows flow data to be extended with additional information about network traffic. As an example, our Flowmon IPFIX extensions enrich IPFIX flow data with application-layer protocol metadata, network performance statistics and other information.

In the Cisco world, IPFIX is usually referred to as NetFlow v10, and it provides various extensions similar to Flowmon’s.

Related standards

NSEL

NSEL (NetFlow Security Event Logging) allows exporting flow data from Cisco’s ASA family of security devices. It has a similar format to NetFlow, but requires a different interpretation and has different use cases – the purpose of NSEL is to track firewall events and logs via NetFlow. Unfortunately, people are sometimes confused by the terminology and consider NSEL compatible with NetFlow. In fact, NSEL does not carry enough information to provide traffic charts or support detailed drill-downs and troubleshooting.

sFlow

Unlike NetFlow, sFlow is based on sampling. An sFlow agent obtains traffic statistics using sFlow sampling, encapsulates them into sFlow packets, which are then sent to the collector. sFlow provides two sampling modes – flow and counter sampling:

  • Flow sampling – the sFlow agent samples packets in one or both directions on an interface based on the sampling ratio, and parses the packets to obtain information about their content
  • Counter sampling – the sFlow agent periodically obtains traffic statistics on an interface

Flow sampling focuses on traffic details to monitor and parse traffic behaviors on the network while counter sampling focuses on general traffic statistics.
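A 1-in-N sampling estimate can be sketched as follows; for simplicity the sketch samples deterministically (every Nth packet), whereas real sFlow agents sample randomly around the configured ratio:

```python
def sample_and_estimate(packet_sizes, ratio):
    """Sample 1-in-ratio packets and scale the counts back up."""
    sampled = packet_sizes[::ratio]      # every Nth packet
    est_packets = len(sampled) * ratio
    est_bytes = sum(sampled) * ratio
    return est_packets, est_bytes

sizes = [100] * 10_000                   # 10,000 packets of 100 bytes each
print(sample_and_estimate(sizes, 100))   # → (10000, 1000000)
```

The estimate is exact here only because the traffic is perfectly uniform; with real traffic the scaled-up figures carry sampling error, which is the trade-off discussed below.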

Due to packet sampling, however, it is not possible to obtain an accurate representation of the traffic, and some traffic will be missed. Sampling can therefore limit the use of flow data in cases like network anomaly detection. On the other hand, it works well for top-talker statistics or DDoS attack detection. Cisco has introduced a very similar technology to sFlow, called NetFlow Lite.

What formats does Flowmon support?

Our standalone Probe allows exporting flow data in NetFlow v5/v9 and IPFIX format. Additionally, the Probe can use the Flowmon IPFIX extension that allows enriching the flow data with additional information, such as network performance statistics (for example, Round-Trip Time, Server Response Time and Jitter) and information from the application protocols (HTTP, DNS, DHCP, SMB, E-mail, MSSQL and others).

The Flowmon Collector can process network traffic statistics from various sources and flow standards, including:

  • NetFlow v5/v9
  • IPFIX
  • NetStream
  • jFlow
  • cflowd
  • sFlow, NetFlow Lite

Conclusion: which flow format to use?

We have introduced the most common flow formats. Although the format you can use depends on your network infrastructure, from our experience in implementing high-performance network monitoring appliances, we highly recommend using NetFlow v9/IPFIX export formats, as they provide the most accurate and comprehensive information.