Blog

Integrated Cyber Network Intelligence: Why would you need Granular Network Intelligence?

“Advanced targeted attacks are set to render prevention-centric security strategies obsolete and that information must become the focal point for our information security strategies.” (Gartner)

In this webinar we take a look at the internal and external threat networks pervasive in today’s enterprise and explore why organizations need granular network intelligence.

Webinar Transcription:

I’m one of the senior engineers here with NetFlow Auditor. I’ll be taking you through the webinar today. It should take about 30 to 40 minutes, I would say and then we will get to some questions towards the end. So let’s get started.

So the first big question here is, “Why would you need something like this? Why would you need Granular Network Intelligence?” And the answer, if not obvious already, is that, really, in today’s connected world, every incident response includes a communications component. What we mean by that is in a managed environment, whether it’s traditional network management or security management, anytime that there’s an alert or some sort of incident that needs to be responded to, a part of that response is always going to be communications, who’s talking to who, what did they do, how much bandwidth did they use, who did they talk to?

And in a security environment in particular, we need to be looking at things like whether there are external threats or internal threats, was there a data breach, can I look at the historical behavior or patterns, can I put this traffic into context against the baseline for that traffic? So that insight into how systems have communicated is critical.

Just some background industry information. According to Gartner, targeted attacks are set to render prevention-centric security strategies obsolete by 2020. Basically, what that means is there’s going to be a shift. They believe there’s going to be a shift to information- and end-user-centric security focused on an infrastructure’s end-points and away from the blocking and tackling of firewalls. They believe there’ll be three big trends. The first is continuous compromise, meaning an increase in the level of advanced, targeted attacks. It’s not going to stop. You’re never going to feel safe that someone won’t be potentially trying to attack you.

And most of those attacks will become financially motivated attacks: attempts to steal information, attempts to gather credit card data if you have it, intellectual property, ransomware-type attacks. So this is not necessarily, “Hey, I’m just going to try and bring down your website or something,” in a traditional world where maybe people are playing around a little bit. These are more organized attacks specifically designed to either elicit a ransom or a reward or just steal information that could be turned into money on the black market, and it’s going to be more and more difficult for IT to have control over those end users’ devices.

Again, very few organizations just have people sitting at their desks with desktop computers anymore. Everybody’s got laptops. Everybody’s got a phone or a tablet that’s moving around. People work from home. They work from the road. They’re connecting in to network resources from anywhere in the world at any time, and it becomes more and more challenging for IT to control those pathways of communications. So if you can’t control it, then you have to certainly be able to monitor it and react to it, and the reaction is really in three major ways: determining the origin of the attack, the nature of the attack, and the damage incurred.

So we’re certainly assuming that there are going to be attacks, and we need to know where they’re coming from, what they’re trying to do, and whether they’ve been able to get there. You know, have we caught it in time, or has something already been infected, or has information been taken away from the network? That really leads us into this little graphic that we have about not being in denial. Understanding that, unfortunately, many people, in terms of their real visibility into the network, are somewhere in the blind or limited-type area. They don’t know what they don’t know, they think they should know but they don’t know, and so on.

But where they really need to be is at, “There’s nothing they don’t know.” And they need tools to be able to move them from wherever they are into this upper left-hand quadrant and certainly, that’s what our product is designed to do. So just kind of looking at the entire landscape of information flow from outside and inside and really understanding that there are new kinds of attacks, crawlers, botnets, ransomware, ToR, DoS and DDoS attacks that have been around for a while.

Your network may be used to download or host illicit material, leak intellectual property, be part of an attack, you know, something that’s command and controlled from somewhere else and your internal assets have become zombies and are being controlled by outside. There are lots of different threats. They’re all coming at you from all over the place. They’re all trying to get inside your network to do bad things and those attacks or that communication needs to be tracked.

Gartner also believes that 60% of enterprise security budgets will be allocated for rapid detection and response by 2020, up from less than 10% just a few years ago. What they believe is that too much of the spending has gone into prevention and not enough has gone into monitoring and response. So the prevention is that traditional firewalling, intrusion detection or intrusion prevention, things like that, which certainly is important. I’m not saying that those things aren’t useful or needed. But what we believe and what other industry analysts certainly believe is that that’s not enough, basically. There needs to be more than the simple sort of “Put up a wall around it and no one will be able to get in” kind of situation. If that were the case, then there would be no incidents anywhere because everybody’s got a firewall; large companies, small companies. Everybody’s got that today, and yet, you certainly don’t go more than a couple of days without hearing about new hacks, new incidents.

Here in the United States, we just came through an election where they’re still talking about people from other countries hacking into one party or another’s servers to try and change the election results. You know, on the enterprise side, there are lots and lots of businesses. Yahoo, in the last couple of months, certainly had a major attack that they had to come clean about, and of course both of those organizations, certainly Yahoo, are serious IT operations. They have those standard intrusion prevention and firewall-type systems, but obviously, they aren’t enough.

So when you are breached, you need to be able to look and see what happened, “What can I still identify, what can I still control, and how do I get visibility as to what happened.” So for us, we believe that the information about the communication is the most important focal point for a security strategy and we can look at a few different ways to do that without a signature-based mechanism. So there’s ways to look at normal traffic and be able to very rapidly identify deviation from normal traffic. There’s ways to find outliers and repeat offenders. There’s ways to find nefarious traffic by correlating real-time threat feeds with current flows and we’re going to be talking about all of these today so that a security team can identify what was targeted, what was potentially compromised, what information may have left the building, so to speak.

There are a lot of challenges faced by existing firewalls, SIEMs, and loosely-coupled toolsets. The level of sophistication is going up and up. It’s becoming more organized. It’s international crime syndicates with very, very intelligent people using these tactics to try and gain money. As we’ve talked about, blocking attacks and layering end-point solutions are just not enough anymore, and of course, there’s a huge cost in trying to deploy and maintain multiple solutions.

So being able to try and have some tools that aren’t incredibly expensive, that do give you valuable information really, can become the best way to go. If you look at, say, what we’re calling sensors; packet captures, DPI-type systems. They, certainly, can do quite a lot, but they’re incredibly expensive to deploy across a large organization. If you’re trying to do packet capture, it’s very, very prohibitive. You can get a lot of detail, but trying to put those sensors everywhere is just… unless you’ve got an unlimited budget, and very few people do, that becomes a really difficult proposition to swallow.

But that doesn’t mean NetFlow can’t still use that kind of information. What we have found and what’s really been a major trend over the last couple of years is that existing vendors, on their devices, Check Point, Cisco, Palo Alto, packet brokers like Ixia, or all of the different people that you see up here, and more and more all the time, are actually adding that DPI information into their flow data. So it’s not separate from flow data. It’s these devices that have the packets going through them that can look at them all the way to layer seven and then include that information in the NetFlow export out to a product like ours that can collect it and display that.

So you can look into the payload and classify traffic according to payload content rather than just identifying it as traffic on port 80 or what have you. You can connect the dots between inside and outside when there’s NAT. You can read the URLs and quickly analyze where they’re going and what they’re being used for. You can get specialized information like MAC addresses; denial information or AAA information if it’s a firewall; SSID information if it’s a wireless LAN controller; and other kinds of things that can be very useful to track down who was talking where.

So different types of systems are adding different kinds of information to the exports, but all of them, together, really effectively give you that same capability as if you had those sniffing products all over the place or packet capture products all over the place. But you can do it right in the devices, right from the manufacturer, send it through NetFlow, to us, and still get that quality information without having to spend so much money to do it.
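To make that concrete, here is a minimal sketch, in Python, of what a collector could do once layer-7 fields ride along in the flow record: group traffic by the exported URL instead of by port alone. The field names, sample records, and addresses are hypothetical placeholders, not the actual export schema of any vendor or of NetFlow Auditor.

```python
# Illustrative sketch only: the field names and sample records are
# hypothetical placeholders. The point is that once layer-7 fields
# (URL, MAC, SSID, deny reason) ride along in the flow export, you can
# classify by payload-derived content instead of by port alone.
from collections import Counter

flows = [
    {"src": "10.1.1.5", "dst": "93.184.216.34", "dst_port": 80,
     "bytes": 48213, "url": "example.com/login", "src_mac": "aa:bb:cc:00:11:22"},
    {"src": "10.1.1.7", "dst": "93.184.216.34", "dst_port": 80,
     "bytes": 1220, "url": "example.com/login", "src_mac": "aa:bb:cc:00:11:33"},
    {"src": "10.1.1.5", "dst": "198.51.100.20", "dst_port": 80,
     "bytes": 9400, "url": "updates.example.net", "src_mac": "aa:bb:cc:00:11:22"},
]

# Group traffic by the URL carried in the export rather than by port 80 alone.
bytes_per_url = Counter()
for flow in flows:
    bytes_per_url[flow.get("url", "unknown")] += flow["bytes"]

for url, total_bytes in bytes_per_url.most_common():
    print(f"{url}: {total_bytes} bytes")
```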

The SANS organization, if you’re not familiar with them, is a great organization that provides a lot of good information, whitepapers, and things like that. They have very often said that NetFlow might be the single most valuable source of evidence in network investigations of all sorts: security investigations, performance investigations, whatever it may be.

The NetFlow data can give you very high-value intelligence about the communications, but the key is in understanding how to get it and how to use it. Another benefit of using NetFlow over packet capture is that it removes the need for huge amounts of storage. Compared to traditional packet capture, NetFlow is much skinnier, and you can store much longer-term information than you could if you had to store all of the packets. The cost, we’ve talked about.

And there are some interesting things like legal issues that are mitigated. If you are actually capturing all packets, then you may run into compliance issues for things like PCI or HIPAA. Certain countries and jurisdictions around the world have very strict regulations about capturing and keeping end-user data. With NetFlow, you don’t have that problem. It’s metadata. Even with the new things that you can get, that we talked about a couple of slides ago, it’s still metadata. It’s still data about the data. It’s not the actual end information. So even without that content, NetFlow still provides an excellent means of guiding the investigations, especially in an attack scenario.
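To put some rough numbers on that storage point, here is a back-of-envelope sketch comparing full packet capture with flow records. Every figure in it (average packet size, packets per flow, bytes per stored flow record) is an assumption chosen purely for illustration, not a measured or vendor-published value.

```python
# Back-of-envelope comparison; every number here is an assumption for
# illustration, not a measured figure.
avg_packet_bytes = 800        # assumed average packet size
packets_per_flow = 40         # assumed packets aggregated into one flow
flow_record_bytes = 100       # assumed size of one stored flow record

pcap_bytes_per_flow = avg_packet_bytes * packets_per_flow   # full capture
ratio = pcap_bytes_per_flow / flow_record_bytes

print(f"Full capture: {pcap_bytes_per_flow} bytes per flow")
print(f"Flow record:  {flow_record_bytes} bytes per flow")
print(f"Flow data is roughly {ratio:.0f}x smaller under these assumptions")
```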

So here, if you bundle everything that we’ve talked about so far into one view and relate it to what we do here at NetFlow Auditor, you would see it on this screen. There are the end users, people, content, and things: the Internet of Things. You’ve got data coming from security cameras, Internet-connected vehicles, refrigerators, it could be just about anything, environmental-type information. It’s all producing data. That data is traversing the network through multiple different types of platforms: routers, switches, servers, wireless LAN controllers, cloud-based systems and so forth, all of which can provide correlation of the information and data. We call that the correlation API.

We then take that data into NetFlow Auditor. We combine it with outside big data, we’re going to talk about that in a minute, so not only the data of the connections but actual third-party information that we have related to known bad actors in the world and then we can use that information to provide you, the user, multiple benefits, whether it’s anomaly detection, threat intelligence, security performance, network accounting, all of the sort of standard things that you would do with NetFlow data.

And then lastly, integrate that data out to other third-party systems, whether it’s your managed service provider or security service provider. It could be upstream event collectors, trappers, log systems, SOAPA ecosystems, whether that’s on-premise or in the cloud or hybrid cloud. All of that is available via our product. So it starts at the traffic level. It goes through everything. It provides the data inside our product and as well as integrates out to third-party systems.

So let’s actually look into this a little more deeply. The threat intelligence information is one of the two major components of our cyber security capability. The way this works is that threat data is derived from a large number of sources. We maintain a list, effectively a database, of known bad IP addresses, known bad actors in the world. We collect that data through honeypots, threat feeds, crowd sources, active crawlers, and our own internal cyber feedback from our customers, and all of that information combined allows us to maintain a very robust list of known bads, basically. Then we can combine that cyber intelligence data with the connection data, the flow data, the session data, inside and outside of your network, you know, the communications that you’re having, and compare the two.

So we have the big data threats. We can process that data along with what’s happening locally in your network to provide extreme visibility, to find who’s talking to who, what conversations are your users having with bad actors, ransomware, botnets, ToR, hacking, malware, whatever it may be and we then provide, of course, that information to you directly in the product. So we’re constantly monitoring for that communication and then we can help you identify it and remediate it as soon as possible.
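As a minimal sketch of that comparison, assuming a threat feed has already been loaded into memory as a mapping of known-bad IPs to categories, the matching itself can be as simple as a lookup over each flow record. The feed entries, record fields, and addresses below are made-up placeholders, not real indicators and not the product’s implementation.

```python
# Minimal sketch: match flow records against a reputation list.
# Feed entries, record fields and addresses are hypothetical placeholders.
threat_feed = {
    "203.0.113.10": "ransomware C2",
    "198.51.100.7": "Tor exit",
    "192.0.2.99": "botnet",
}

flows = [
    {"src": "10.0.0.21", "dst": "203.0.113.10", "bytes": 321_000_000},
    {"src": "10.0.0.33", "dst": "172.16.4.2", "bytes": 4_800},
]

# Flag any conversation whose remote side appears on the reputation list.
for flow in flows:
    for local, remote in ((flow["src"], flow["dst"]), (flow["dst"], flow["src"])):
        category = threat_feed.get(remote)
        if category:
            print(f"affected IP {local} <-> threat IP {remote} "
                  f"({category}), {flow['bytes']} bytes")
```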

As we zoom in here a little bit, you can see that that threat information can be seen in summary or in detail. We have it categorized by different threat levels, types, severities, countries of origin, affected IPs, and threat IPs. As anyone who’s used our product in the past knows, we always provide an extreme amount of flexibility to really slice and dice the data and give you a view into it in whatever way is best consumed by you. So you can look at things by type, or by affected IP, or by threat IP, or by threat level, or whatever it may be, and of course, no matter where you start, you can always drill in, you can filter, you can re-display things to show them in a different view.

Here’s an example of identifying some threat. These are ransomware threats, known ransomware IPs out there. I can very easily just right-click on that and say, “Show me the affected IP.” So I see that there’s ransomware. Who’s affected by that? Who is actually talking to that? And it’s going to drill right down into that affected IP or maybe multiple affected IPs that are known to be talking to those ransomware systems outside. You could see when it happened. You can see how much traffic.

In this example, our top affected IP has a tremendous amount of data, 307 megs over that time period, much more than the next ones below it, so that’s clearly one that needs to be identified and responded to very quickly. It can be useful to look at it this way, to see, “Hey, is this one system that’s been infiltrated, or is it now starting to spread? Are there multiple systems? Where is it starting? Where is it going, and how can I stem that tide?” It’s very easy to get that kind of information.

Here’s another example showing all ransomware attack traffic traversing a large ISP over a day. Whether you’re an end user or a service provider, and we have many, many service provider customers that use this to monitor their customers’ traffic, this could be something you look at to say, “Across my entire ISP, where is that ransomware traffic going? Maybe it’s not affecting me, but it’s affecting one of my customers.” Then you can drill into that, alert and alarm on it, and potentially block it right away as extra help to your customers.

Ransomware is certainly one of the scariest things out there right now. It’s happening every day. There are reports of police stations having to pay ransom to get their data back, hospitals having to pay ransom to get their data back. It’s kind of interesting that, to our knowledge, there has hardly been a case where the ransomers, the bad guys out there, haven’t actually released the information back to their “customers” and supplied the decryption key. Because they want the money and they want people to know, “Hey, if you pay us, we will give you your data back,” which is really, really frightening, actually. It’s happening all the time and needs to be monitored very, very carefully. This is certainly one of the major threats that exist today.

But there are other threats as well; peer-to-peer traffic, ToR traffic, things like that. Here’s an example of looking at a single affected IP that is talking to multiple different threat IPs that are known to have been hosting illicit content over this time period. You could see that, clearly, it’s doing something. You know, if there is one host that is talking to one outside illicit threat IP, okay, maybe that’s a coincidence or maybe it’s not an indication of something crazy going on. But when you can see that, in this case, there’s one internal IP talking to 89 known bad threat IPs who have been known to host illicit traffic, okay, that’s not a coincidence anymore. We know that something’s happening here. We can see when it happened. We know that they’re doing something. Let’s go investigate that. So that’s just another way of kind of giving you that first step to identify what’s happening and when it’s happening.

You know, sometimes illicit traffic may just look like some obscured peer-to-peer content, but NetFlow Auditor, our product, allows you to see it in full forensic detail. You can see what countries it’s talking to, what kind of traffic it is, what kind of threat level it is. It really gives you that fully detailed data about what’s happening.

Here’s another example of a Tor threat. People who are trying to anonymize their data or get around any kind of traffic-analysis-type system will use Tor to try and obfuscate that data. But we have, as part of our threat data, a list of Tor exits and relays and proxies, and we can look at that and tell you, again, who’s sending data into that Tor world out there, which may be an indication of ransomware and other malware, because they often use Tor to try and anonymize that data. But it could also be somebody inside the organization who’s trying to do something they shouldn’t be doing, getting data out, which could be very nefarious. You never want to think the worst of people, but it does happen. It happens every day out there. So again, that’s another way that we can give you some information about threats.

We can also help you visualize the threats. Sometimes it’s easier to understand things by looking at a graphical depiction. So we can show you where the traffic is moving and the volume of traffic, how it’s hopping around, in this case through Tor endpoints. Tor is tricky: the whole point of Tor is that it’s very difficult to trace an endpoint from another single endpoint. But being able to visualize it all together actually allows you to get a handle on where that traffic may be going.

Really large service providers who are interested in tracking this stuff down need a product that can scale, and we’ve got a great story about our massive scalability. We can use a hierarchical system. We can add additional collectors. We can do a lot of different things to handle a huge volume of traffic, even for Tier 1-type service providers, and still provide all of the data and detail that we’ve shown so far.

A couple other examples, we just have a number of them here, of different ways that you can look at the traffic and slice and dice it. Here’s an example of top conversations. So looking for that spike in traffic, we could see that there was this big spike here, suddenly. Almost 200 gig in one hour, that’s very unusual and can be identified very, very quickly and then you can try and say, “Okay, what were you doing during that time period? How could it possibly be that that much information was being sent out the door in such a short period of time?”

We also have port usage. So we can look at individual ports that are known threats over whatever time period you’re interested in. We can see this is port 80 traffic, but it’s actually connecting to known Tor exits. So that is not just web surfing. You can visualize changes over time, you can see how things are increasing over time, and you can identify who is doing that to you.

Here’s another example of botnet forensics: understanding a conversation with a known botnet command and control server. Many times, those come through initially as a phishing email. They’ll just send millions of spam emails out there hoping for somebody to click on one. When somebody does click on it, it downloads the command and control software and away it goes. So you can actually see the low-level continual spam happening, and then all of a sudden, when there’s a spike, that’s the botnet command and control traffic starting up, and from there all kinds of bad things can happen.

So identifying impacted systems that have more than one infection is a great way to really sort of prioritize who you should be looking at. We can give you that data. I could see this IP has got all kinds of different threats that it’s been communicating to and with. You know, that is certainly someone that you want to take a look at very quickly.

I talked about visualization, some. Here are a few more examples of visualizations in the product. Many of our customers use this. It’s kind of the first way that they look at the data and then drill into the actual number part of the data, sort of after the visualization. Because you could see, from a high-level, where things are going and then say, “Okay, let me check that out.”

Another thing that we do as part of our cyber bundle, if you will, is anomaly detection and what we call “Two-phased Anomaly Detection.” Most of what I’ve talked about so far has been related to threat detection, matching up those known bads to conversations or communications into and out of your network. But there are other ways to try and identify security problems as well. One of those is anomaly detection.

So anomaly detection is an ability of our product to baseline traffic in your network, lots of different metrics on the traffic. So it’s counts, and flows, and packets, and bytes, and bits per second, and so forth, TCP flags, all happening all the time. So we’re baselining all the time, hour over hour, day over day and week over week to understand what is normal and then use our sophisticated behavior-based anomaly detection, our machine learning ability to identify when things are outside the norm.

So phase one is we baseline, so that we know what is normal, and then alert or identify when something is outside the norm. Phase two is running a diagnostic process on those events: understanding what that event was, when it happened, what kind of traffic was involved, what IPs and ports were involved, what interfaces the traffic went through, what it possibly portends, whether it was a DDoS-type attack, a port sweep or a crawler-type attack – what was it? The result of that is our alert diagnostic screen like you can see in the background.
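For a rough sense of how those two phases fit together, here is a minimal sketch that baselines one metric (bytes per hour on an interface), flags values that deviate from that baseline, and then runs a crude diagnostic over the offending flows. The history values, the three-sigma threshold, and the classification rules are simplified placeholders, not the product’s actual baselining or machine-learning logic.

```python
# Minimal two-phase sketch with placeholder thresholds and rules.
from statistics import mean, stdev

def baseline(history):
    """Phase one: learn what 'normal' looks like for one hour-of-week slot."""
    return mean(history), stdev(history)

def is_anomalous(value, mu, sigma, n_sigmas=3):
    """Flag values that deviate too far from the learned baseline."""
    return sigma > 0 and abs(value - mu) > n_sigmas * sigma

def diagnose(offending_flows):
    """Phase two: crude classification of what the offending traffic was."""
    src_ips = {f["src"] for f in offending_flows}
    dst_ports = {f["dst_port"] for f in offending_flows}
    if len(dst_ports) > 1000:
        return "possible port sweep / scanner"
    if len(src_ips) > 1000:
        return "possible DDoS (many sources, one target)"
    return "volume anomaly - needs manual review"

# Example: this hour-of-week slot has historically carried about 2 GB/hour.
history_gb = [1.9, 2.1, 2.0, 2.2, 1.8, 2.05]
mu, sigma = baseline(history_gb)
current_gb = 180.0   # a sudden spike, like the one on the slide
if is_anomalous(current_gb, mu, sigma):
    print("anomaly detected:", diagnose([{"src": "10.0.0.5", "dst_port": 445}]))
```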

That diagnostic screen qualifies the cause and impact for each offending behavior. It gives you the KPI information. It generates a ticket. It allows you to integrate with other third-party SNMP trap receivers, so we can send our alerts and diagnostic information out as a trap to another system, and everything can be rolled up into a manager-of-managers-type system, if you wish. You can intelligently whitelist traffic that is not really offensive traffic that we may have identified as an anomaly. Of course, you want to reduce the amount of false positives out there, and we can help you do that.

So to kind of summarize…I think we’re just about at the end of the presentation now. To summarize, what can NetFlow Auditor do in our cyber intelligence? It really comes down to forensics, anomaly detection, and that threat intelligence. We can record and analyze, on a very granular level, network data even in extremely complex, large, and challenging environments. We can evaluate what is normal versus what is abnormal. We can continually monitor and benchmark your network and assets. We can intelligently baseline your network to detect activity that deviates from those baselines. We can continuously monitor for communication with IPs of poor reputation and remediate it ASAP to reduce the probability of infection and we can help you store and compile that flow information to use as evidence in the future.

You’re going to end up with, then, extreme visibility into what’s happening. You’re going to have three-phase detection. You have full alerting and reporting. So any time any of these things do happen, you can get an alert. That alert can be an email. It can be a trap out to another system as I mentioned earlier. Things can be scheduled. They’re running in the background 24/7 keeping our software’s eyes on your network all the time and then give you that forensics drill-down capability to quickly identify what’s happened, what’s been impacted, and how you can stop its spread.

The last thing we just want to say is that everything that we’ve shown today is the result of a large development effort over the last number of years. We’ve been in business for over 10 years, delivering NetFlow-based analytics. We’ve really taken a very heavy development exercise into security over the last few years and we are constantly innovating. We’re constantly improving. We’re constantly listening to what our customers want and need and building that into future releases of the product.

So if you are an existing customer listening to this, we’d love to hear your feedback on what we can do better. If you are potentially a new customer on this webinar, we’d love your ideas from what you’ve seen as to if that fits with what you need or if there’s other things that you would like to see in the product. We really do listen to our customers quite extensively and because of that, we have a great reputation with our customers.

We have a list of customers up here. We’ve got some great quotes from our customers. We really do play across an entire enterprise. We play across service providers and we love our customers and we think that they know that and that’s why they continue to stay with us year after year and continue to work with us to make the product even better.

So we want to thank everybody for joining the webinar today. We’re going to just end on this note that we believe that our products offer the most cost-effective approach to detect threats and quantify network traffic ubiquitously across everything that you might need in the security and cyber network intelligence arena and if you have any interest in talking to us, seeing a demo, live demo of the product, getting a 30-day evaluation of the product, we’re very happy to talk to you. Just contact us.

If you’ve got a salesperson and you want to get threat intelligence, we’re happy to enable it on your existing platform. If you are new to us, hit our website, please, at netflowauditor.com. Fill out the form for a trial, and somebody will get to you immediately and we’ll get you up in the system and running very, very quickly and see if we can help you identify any of these security threats that you may have. So with that, we appreciate your time and look forward to seeing you at our webinar in the future. Bye.

 


End Point Threat Detection Using NetFlow Analytics

Webinar Transcription:

Hi, good afternoon everyone. I’m from NetFlow Auditor. Our webinar today is on some of the finer security aspects of our product, specifically Anomaly Detection and End Point Threat Detection. End Point Threat Detection being one of the newer pieces that we’ve added to the system. It should take about half an hour today, and then we’ll let you get back to your day. A reminder that everyone is on ‘mute’ during the presentation. We have a number of attendees here today, and we want to keep down the background noise, so everybody will automatically be muted. However, we encourage questions so, if you do have any, then please use the control panel, there’s a little question tab you can type in your question, and I will see them and respond to them probably towards the end of the webinar.

So, with that we’re going to get started. Again, we appreciate everyone taking the time today to listen to what we have to say, learn about our product, and learn about some of the new features. If you’re on here and you’re an existing customer, you’ll learn a little bit about one of our new features. So, today we’re going to be talking a lot about security, that’s really the focus of this presentation. NetFlow in general, and NetFlow Auditor in particular, can do a lot of things with the data that we have, and one of those things is really focused on being able to identify security threats to your network.

This is obviously very important, right? I mean you literally cannot go a day anymore without hearing of some company, some organization out there that’s been attacked or that has been infiltrated. I was reading about a hospital system recently that was held up by a ransomware gang and actually had to pay money to unlock their files. This is not a home user, not a person who opened up the wrong email and had their desktop attacked or held for ransom. This is a legitimate hospital organization that had that happen to them, and it really underscores the pervasiveness of these kinds of attacks.

Crawlers, botnets, Ransomware, they’re finding new ways to cause denial of service attacks and other kinds of attacks that can put your business or organization at an extremely high risk and, your network could be used to download or host illicit materials, leak intellectual property. That’s another thing that we’ve seen, this sort of cybercrime. Intellectual property cybercrime where it’s not that they’re just trying to bring down your site or bring down your network, but they’re actually trying to take intellectual property out and again, either hold it for ransom or just sell it or whatever it may be. So, this is certainly an important topic.

There are a number of major challenges for security teams to try and figure out what’s going on and how to lock down that network. The sophistication of the cybercrime organizations out there is just growing and growing. They’re always seemingly one step ahead of the for-profit companies that are trying to block them; the anti-virus companies, firewall companies and so forth. The growing complexity of the infrastructure is making it more difficult, there’s not a single point of entry and exit anymore. You’ve got BYOD, you’ve got lots of wireless, you’ve got VPNs, cloud-based services, you’ve got all kinds of things that people are using today. So it’s not just a lock it down at the firewall and we’re good, it’s really all over the place, and you need to be able to look at the traffic to understand what’s going on.

Of course, it’s very difficult, or can be very difficult, to retain and analyze that network transaction data across a big organization. Again, you have lots and lots of systems, lots of points of entry and exit, and it can be a challenge to really be able to collect all of that data and be able to use it. Because of that, because of the highly complicated and complex nature of networks, we’ve got this graphic here that talks about the really scary things that are out there. Do you know where things are happening? You have certain aspects that you know, and some that you know you don’t know, but the really scary stuff is when you don’t know what you don’t know, right? It’s happening or could be happening and you have no idea, and you don’t even know that you should be looking at that, or could be looking at that data, to try and understand what’s going on.

But in fact, products like ours and technologies like ours allow a system to be watching for those unknown unknowns all the time. So, it’s not something that you wake up in the morning and say, “I’m going to go look at this.” It’s actually happening in the background and looking for you. That machine learning capability is really what allows the new generation of systems like ours to catch up with the sophistication of the attack profiles out there.

When there is an attack or when there is a detection of something, then Incident Response Teams always have to look at that communications component, right? So, they’re going to look at hardware, they’re going to look at software, but they also have to look at the communications. They have to look at historical behavior, they have to look to see if there’s been data breaches, they have to look to see if there’s been internal threats.

There is a certain percentage, depending on who you talk to, 30%, 35%, 40% of data breaches happen from the inside out. So, these are internal employees who have access to something that they shouldn’t, and they email that out or they otherwise try to get that data out of the network. Of course, there’s the external threats from bad actors, those malicious types that are probing, probing, probing trying to find holes to get in and do whatever, the nefarious things that they’re trying to do.

So, being able to have some insight into the nature of how all of your systems communicate with each other, and how they have communicated, is critical. It’s really about being able to go from the blind area into a much more aware and certain area, right? So think about it: do you really have visibility in terms of what’s going on inside your network? Because if you don’t, that can certainly hurt you.

The way we look at it, there’s the very basic things that virtually everybody has. Everybody has a firewall, most people have virus protection on their desktops. That sort of blocking and tackling, very basic prevention at the edge of a network is only a piece, right? It is not the most effective place anymore. You have to have it, we certainly wouldn’t tell you not to have it, but if you really want to move to a defense in depth, then it’s more than just trying to put up a blocking of things coming in. It’s being able to look at the live traffic and see what’s happening and identify if there are threats going on that got through. If something gets through the defenses that you have, how can you then further identify that it has happened and what’s going on? If you just think, “Well, I’ve got this firewall and I got my rules setup and I’m good, nothing can ever touch me,” and don’t look any further, then you’re really setting yourself up for a failure.

So, the way we approach the problem, as a piece of this overall security landscape, is through the use of NetFlow information. NetFlow’s been around for a long time; it’s quite a mature technology. But the great thing about it is that it keeps maturing further as we go. What used to be a traffic accounting product only, based on data coming from core routers and switches, has now been extended out to other systems in the network: things like wireless LAN controllers, cloud servers, and firewalls themselves. You can get the data from taps and probes that passively collect information about data traffic and then turn that into a NetFlow export that can be sent to us and that we can read.

Virtually every vendor, certainly every major vendor out there, supports flow in some way. Cisco of course has NetFlow, and we use the term NetFlow generically to mean all of the various flow types out there: J-Flow from Juniper, anything that’s IPFIX-compatible as the standard, and some of the other specialized versions of flow, if you will. But all of them have the common theme that they’re going to look at that traffic and they’re going to be able to send that metadata to a collector like ours, and then we can use that information intelligently to help you report on and look deeply into the data. But also, and this is what we’re going to be talking about today, we use the intelligence that’s built into the product to identify threats and look at anomalies. Not just show you who your top talkers were, but actually say, “Hey, look. We’ve identified people that are communicating to known bad actors out there,” or, “We’ve seen an unusual bit of behavior in traffic between here and there, and this is something that really needs to be investigated.”

Talking about more of the specifics about how we do that. There’s two major pieces we’re going to be focusing on today. The first one is Anomaly Detection. Anomaly Detection for us means that we can baseline your network and the traffic on your network across a number of different dimensions. There’s actually quite a few metrics that we’re watching, some of the ones you could see below like flows, and packets, and bytes, and bits per second, packet size, it can be flags, it can be counts it can be all kinds of different metrics, and we can baseline each of them over time, across all of your interfaces or potentially even other aspects. So, it could be a specific conversation or a specific application, but at its most basic level through all of your interfaces to understand what is normal and what is normal activity for that time of day, that day of the week from those devices or whatever it may be.

Then of course, once we know what is normal, we can detect any activity that deviates from that normal baseline, right? This gives you a really great way of watching traffic 24/7 for things that you wouldn’t pick up if you were just eyeballing it, or waiting for someone to contact you and say there’s a problem. So, the statistical power of an application doing this behind the scenes, running all the time and noticing things that you wouldn’t notice in the middle of the night, is incredibly useful for this sort of thing. Then, when we do detect an anomaly, we move into phase two, as we call it: diagnostics. Diagnostics says, “Okay, there’s been some anomaly detected, let’s look at this. Let’s figure out what’s going on here.” We then kick off this diagnostic approach, which qualifies the cause and impact for each offending behavior breach. We’re looking for KPIs that are specific to things like DoS attacks or scanners or sweepers or peer-to-peer activity. We roll all of that information up into a single ticket, so to speak, on a screen that you can very easily look at and understand exactly what’s going on. When did it happen? Where did it happen? What was involved? What baseline was breached? What does that mean? What could that possibly be?

You can also do, of course, advanced things like intelligent whitelisting. You can send the information out of our system up to another system that you may have, like an ITSM or trouble ticket system, via SNMP, via email and so forth. So, really, this is the intelligent piece of the product, with machine learning in the background. It’s doing this whether you’re watching it or not. It’s looking for those baseline breaches, and when we see them, it’s coordinating all of the information about what happened into a single, easy-to-use place, which you can then drill down into using all of our standard features to try and identify other things that are happening or where you need to go next.

Anomaly Detection or NBAD as you may hear us talk about it, has been in the product for a number of years now. So, that’s not something new, it’s continually being improved, and it’s a wonderful piece of the product, and it’s been there for a while.

The new thing that we have introduced and are introducing is what we call our Endpoint Threat Detection. So this is another module added onto the product that adds additional security capabilities while still utilizing all of the things that you typically utilize. So we’re still taking the data from NetFlow information but now we are applying to that information other outside data sources that we have, basically using some big data threat feeds collated from multiple sources that you can match up to or coordinate with the information about your traffic.

So, I’ve got information about my traffic, I’ve had that. Now, I’ve got information about what is bad in the world and in real time, where known bad actors, known bad IP addresses, Ransomware, malware, DDoS attacks, Tor and so forth are coming from and then looking at the two of them and saying, “Are any of my people talking to those things?” At the very most basic level that’s what we’re looking for, right? So, it’s things global in terms of getting all of these feeds and using pattern matching, and Anomaly Detection and so forth, and then it’s acting very local against the traffic that you have in your network.

This capability of network connection logging, or NetFlow, as everybody in the industry agrees, is one of the best places you can get this data. It’s almost impossible to get this kind of granular information from any other source, especially if you are held to any sort of standard in terms of retention, or policies around not being able to look directly into the data. If you’ve got compliance requirements that say, “Hey, I can’t store my customers’ data,” that is fine with NetFlow, because NetFlow is not looking inside the packets; it’s looking at the metadata. Who’s talking to whom, when are they doing it, how much talking are they doing, and so forth. But it’s not actually reading an e-mail or anything inside of that. So, you’re not going to run afoul of any of those regulatory problems, but you’re still able to get a huge amount of benefit in a network investigation using that data.

It’s important that even without content, NetFlow provides an excellent means of guiding that investigation, because there’s still so much data there. As it’s called in our world, metadata: data about the data! There’s still so much information there. But what’s also great is that you don’t have to retain content, unlike, let’s say, a probe or other type of system that is collecting every bit and byte. You run into problems there too: they’re expensive, and you run into storage requirements. Trying to store every conversation historically, including the data, over a long period of time is incredibly expensive and incredibly unwieldy. The amount of storage you have to have to be able to do that, and the difficulty in quickly and effectively retrieving that information and searching for things, just becomes next to impossible. But when you can still get the same benefit of what you need to look at from a security standpoint, without those complications of price and the logistics of handling it all, you end up with a really valuable product, and that’s what NetFlow can give you.

So, with our Endpoint Threat Detection, I’ve got a few screens here that can really dive down into what it looks like and how it works. Again, we’ve got these big data feeds of threat information out there in the world, collected from various sources, honeypots and so forth, and we’re continuously monitoring for communications with those IPs of poor reputation. So, you’ve got your communication that we can see because of NetFlow, and you’ve got these known bad actors out there that we know about. We can match up those two pieces of information, and when we do, we’re not just saying it happened, we’re giving you much more detail about what happened. So, if we zoom in here a little bit, threat data can be seen in summary or in detail. We’ve got a categorization of what’s happening and the different threat types. So, I can see: is this a peer-to-peer kind of thing, is this known malware, is it Tor, is it an FTP or an SSH attacker? What kind of thing is happening from or on these known bad IP addresses?

So, from a high, macro level you can see what the threat categories and threat types are, and then of course, you can drill down using the standard NetFlow Auditor tools to investigate them and provide complete visibility into that threat. So, now I’ve seen it, I have traffic that’s been identified as a threat. I can use our drill-down, right-click, or however-you-want-to-do-it capability. In this case, we’re showing a right-click on threat detection and saying, show me the affected IP addresses. I want to drill down and see, in this case on command-and-control ransomware, what the infected IP addresses are, and then you’re going to get into the individual affected IPs, the threat IP where it’s coming from, and how much traffic was done.

These are Ransomware-type attacks, and I can see this is happening in my network at this period of time and I can even then of course change the view to be a time view. When did this start? Has this been a long-lived thing that’s been going on over a period of time where it’s been sucking information out of my organization, or did this pop off and go away? And if it did, when did that happen? All of that kind of deep level investigation is something that you can get using all of the normal tools that we have. You can get this deep dive investigation of traffic for regular traffic. Not just malicious traffic, but just using our tool for what I’ll call normal traffic accounting. Who is talking to who and when, is all available to you and more now with the threat detection features.

So, we’re watching for those threats, we’ve identified them and then using all of the common things that you’re used to using if you’re already a customer of ours, being able to identify or drill down into that data and provide those reports when you want to see it.

Here’s another example: let’s look at threat-port usage over the last few hours. So, it’s maybe a couple-hour time frame, and I can see specifically which ports and which protocols have been detected as potential threats, what kind of threats they are, and of course, again, how much traffic they used and how long this has gone on for. So, in this case you can see increasing Tor usage, which we’ve highlighted in yellow and green, but you can also notice this continual botnet chatter, this red line. It’s just been going on and on forever, and that’s obviously something that needs to be looked into. It might be very difficult to find this in any other way; it’s just ongoing background chatter that’s been happening. It may not spike to anything so large that it would set off a threshold alert, or maybe not even set off an anomaly alert. But we’ve identified it as definitely being an issue, because it’s communicating to something that we know is bad out there.
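That kind of steady, low-volume beaconing is exactly where a persistence check helps, because no volume threshold will ever trip on it. Here is a minimal sketch of the idea: count how many distinct hours a host has talked to a known-bad address, no matter how little data moved. The addresses, hour buckets, and the three-hour cutoff are hypothetical placeholders rather than the product’s logic.

```python
# Hypothetical sketch: volume thresholds miss this traffic, persistence doesn't.
from collections import defaultdict

known_bad = {"192.0.2.99"}   # e.g. a botnet controller from a threat feed

flow_log = [
    # (hour_bucket, local_ip, remote_ip, bytes)
    ("2017-01-09T01", "10.0.0.12", "192.0.2.99", 740),
    ("2017-01-09T02", "10.0.0.12", "192.0.2.99", 810),
    ("2017-01-09T03", "10.0.0.12", "192.0.2.99", 690),
]

hours_seen = defaultdict(set)
for hour, local, remote, nbytes in flow_log:
    if remote in known_bad:
        hours_seen[(local, remote)].add(hour)

for (local, remote), hours in hours_seen.items():
    if len(hours) >= 3:   # chatter present in three or more separate hours
        print(f"{local} has steady low-volume chatter with {remote} "
              f"across {len(hours)} hours - likely beaconing")
```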

Of course, you have all of the common reporting-type tools. So you can automate those threat reports: I want a threat report every hour emailed to me, or every day, or whatever makes sense, or a roll-up report every month to provide to management that says, okay, over the last 30 days, here are all the threats that were identified in our network, here’s what’s been remediated, here’s what we’ve blocked, here’s what we’ve stopped, here’s what we’ve fixed, here’s what we’ve cleaned up. All of those reports, which look good and can be scheduled, are great both for live use and for management, and they are part and parcel of the product that we’ve been delivering for over a decade now.

As well as those deep-dive threat forensics. The high-level reports are good for some people, but the deep-dive reports are of course important for other people, and that’s something we can give you because we store and archive all of this flow information. It’s not just the top 100 or the top 500, it’s the top 5,000 or 10,000, or every single flow using our compliance version. The compliance version has the ability to store all of those flows all the time for you to pull up and review, and it may not have been yesterday, it may have been last week or last month or six months ago or whenever. You can still drill in, you can still see every individual flow in terms of IPs, source and destination, ports, protocols, interfaces and all of that kind of information. It gives you that super granular capability that you’re just not going to find anywhere else.

We also try to give you different viewpoints; we’re very big on flexibility in terms of giving you an easy-to-understand way of looking at the traffic. Some people like to view numbers and other people like to view pictures, and there are lots of ways that we can show that data to you. The visualization capability within our product is outstanding, and here is one of the ways it can be really useful. We’ve got this example of a Tor correlation attack. De-anonymizing Tor traffic is a difficult but super important problem, and for us, when we see that there has been Tor traffic, we can build this visualization and see all the different places that that Tor traffic has hopped to within your network, or in and out of your network. That really gives you a way to get in and say, “Okay, I need to look here, I need to stop it here, I need to stop it there.” From a service provider perspective, this can be a really, really useful example of the power of our product.

So, with the last few minutes here, I know we’re getting close to the time frame, but we do want to talk about the many options you have in terms of our scalable architecture. Whether you are a small or mid-size organization, or a very, very large organization, we have a way of delivering our product to you. It could be a single standalone environment with a single database and a single software installation. Or, as you grow, and maybe you have various components of traffic that are disseminated globally and you need local collection, we can do that: we can offer split-off collectors or helper collectors that communicate up to a single master database, or we can even do a multi-site, multi-database hierarchical architecture for really massively scaled organizations. So, no matter who you are, if you’re listening to this, whether you’re a small organization with one site and a few devices, or a massive global corporation with thousands of devices and data traversing many different areas, we can fit your organization and we can architect a solution that is right for you.

We’ve got a number of exciting features coming. One of the great things about us is that we never stop developing and we never stop investigating what the best things are to add to the product. We’ve got some really cool enhancements on the way, all things that people have asked about or inquired about, or that we’ve decided to build on our own, and we love talking to our customers.

Our best source of future development is requests from our customers. So, anything that you can think of, I can’t guarantee that our team will do it, but I can certainly guarantee that we’ll listen to you, we’ll think about it, and we’ll do our absolute best to solve whatever issue you may have. Because of our commitment to our customers and our willingness to listen to them, we really have built up a wonderful group of customers. You can see a few of their logos on the screen here, everything from traditional enterprises to service providers, educational institutions, telcos, whatever it may be, we can handle it. If you’re not already a customer of ours but you’re listening to this webinar, we’d certainly love to have your logo on this list in the future, and we feel like once you get to working with us and really get used to our product, you’re going to be super thrilled about how we do things, what we offer to you and the support we provide to you.

So, with that I think we’re at the end of the presentation, almost exactly right on time here, about 30 minutes. So, I want to thank everyone for taking the time to join today, as always it does not look like we have… I’m just looking. Does not look like we have any questions right now, so, if you do have any now would be the time to type them in. But if not, we just want to thank you for joining us today. This presentation has been recorded and will be available to any of the folks who registered, and it’ll eventually make it up into the website. So, please check it out. Also please check out our website for other information about future webinars or other documentation that we have, there’s a lot of good resources up there and we invite you to take a look at those and certainly if you have any questions to reach out to us either to the sales team or the support or engineering team depending on what you’re interested in.

So, with that, I’ll end the session and I look forward to speaking with all of you at some point in the future.

Thanks.


Benefits of a NetFlow Performance Deployment in Complex Environments

Since no two environments are identical and no network remains static, the only thing we can expect in network monitoring today is the unexpected!

The network has become a living, dynamic and complex environment that requires a flexible approach to monitor and analyze. Network and security teams are under pressure to go beyond simple monitoring techniques to quickly identify the root causes of issues, de-risk hidden threats and monitor network-connected things.

A solution’s flexibility refers to not only its interface but also the overall design.

From a user interface perspective, flexibility refers to the ability to perform analysis on any combination of data fields with multiple options to view, sort, cut and count the analysis.

From a deployment perspective, flexibility means options for deployment on Linux or Windows environments and the ability to digest all traffic or scale collection with tuning techniques that don’t fully obfuscate the data.

Acquiring flexible tools is a superb investment, as they enrich and facilitate local knowledge retention. They enable multiple network-centric teams to benefit from a shared toolset, and the business begins to leverage the power of big data analytics that, over time, grows and extends beyond the tool’s original requirements as new information becomes visible.

What makes a Network Management System (NMS) truly scalable is its ability to analyze all the far reaches of the enterprise using a single interface with all layers of complexity to the data abstracted.

NetFlow, sFlow, IPFIX and their variants are all about abstracting routers, switches, firewalls or taps from multiple vendors into a single searchable network intelligence.

It is critical to ensure that abstraction layers are independently scalable to enable efficient collection and be sufficiently flexible to enable multiple deployment architectures to provide low-impact, cost-effective solutions that are simple to deploy and manage.

To simplify deployment and management, the solution has to work out of the box and be self-configuring and self-healing. Many flow monitoring systems require a lot of time to configure or maintain, making them expensive to deploy and hard to use.

A flow-based NMS needs to meet various alerting, analytics, and architectural deployment demands. It needs to adapt to rapid change, pressure on enterprise infrastructure and possess the agility needed to adapt at short notice.

Agility in provisioning services, rectifying issues, customizing and delivering alerts and reports and facilitating template creation, early threat detection and effective risk mitigation, all assist in propelling the business forward and are the hallmarks of a flexible network management methodology.

Here are some examples that require a flexible approach to network monitoring:

  • DDoS attack behavior changes randomly
  • Analyze Interface usage by Device by Datacenter by Region
  • A new unknown social networking application suddenly becomes popular
  • Compliance drives need to discover Insider threats and data leakages occurring under the radar
  • Companies grow and move offices and functions
  • Laws change requiring data retention suitable for legal compliance
  • New processes create new unplanned pressures
  • New applications cause unexpected data surges
  • A vetted application creates unanticipated denials of service
  • Systems and services become infected with new kinds of malicious agents
  • Virtualization demands abruptly increase
  • Services and resources require a bit tax or 95th percentile billing model
  • Analyzing flexible NetFlow fields supported by different device vendors such as IPv6, MPLS, MAC, BGP, VPN, NAT paths, DNS, URL, Latency etc.
  • Internet of Things (IoT) devices become part of the network ecosystem and require ongoing visibility to manage

Performance Monitoring & Security Forensics: The 1-2 Punch for Network and IT Infrastructure Visibility

5 Ways Flow Based Network Monitoring Solutions Need to Scale

A common gripe for Network Engineers is that their current network monitoring solution doesn’t provide the depth of information needed to quickly ascertain the true cause of a network issue.

Many already have over-complicated their monitoring systems and methodologies by continuously extending their capabilities with a plethora of add-ons, or relying on disparate systems that often don’t interface very well with each other.

There is also an often mistaken belief that the network monitoring solutions that they have invested in will suddenly give them the depth they need to have the required visibility to manage complex networks.

A best-value approach to network monitoring is to use a flow-based analytics methodology such as NetFlow, sFlow or IPFIX.

In this market, it’s common for the industry to express a flow software’s scaling capability in flows-per-second.

Using flows-per-second as a guide to scalability is misleading, as it is often used to hide a flow collector’s inability to archive flow data by overstating its collection capability.

It’s important to look not just at flows-per-second, but also at the granularity retained per minute (the flow retention rate); the speed and flexibility of alerting, reporting, forensic depth and diagnostics; and scalability when impacted by high flow variance, sudden bursts, and the number of devices and interfaces. Also consider the speed of reporting over time, the ability to retain both short-term and historical collections, and the confluence of these factors as they affect the scalability of the software as a whole.
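
As a rough illustration of why flows-per-second alone can mislead, the sketch below (a hypothetical Python calculation with invented numbers) compares two collectors: one that ingests a high flow rate but aggressively drops or aggregates records, and one that ingests fewer flows but archives nearly all of them. The flow retention rate per minute, not the headline flows-per-second figure, is what determines how much forensic granularity survives.

```python
# Hypothetical comparison of two flow collectors (illustrative numbers only).

def retention_rate(flows_received_per_min: int, records_archived_per_min: int) -> float:
    """Fraction of received flows that are actually archived each minute."""
    return records_archived_per_min / flows_received_per_min

collectors = {
    # name: (flows received per minute, flow records archived per minute)
    "Collector A (high fps, heavy aggregation)": (3_000_000, 150_000),
    "Collector B (lower fps, full archive)": (1_200_000, 1_150_000),
}

for name, (received, archived) in collectors.items():
    rate = retention_rate(received, archived)
    print(f"{name}: ingests {received // 60:,} flows/sec, "
          f"retains {rate:.0%} of flows per minute")
```

Collector A advertises the higher flows-per-second figure, but Collector B retains far more of the granular records needed for forensics and compliance reporting.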

Flow-based network monitoring software needs to scale in its collection of data in five ways:

  1. Ingestion capability – the number of flows that can be consumed by a single collector.
  2. Digestion capability – the number of flow records that can be retained by a single collector.
    Flow retention rates are particularly critical to quantify, as they dictate the level of granularity that allows a flow-based NMS to deliver the visibility required to achieve quality Anomaly Detection, Network Forensics, Root Cause Analysis, Billing Substantiation, Peering Analysis and Data Retention compliance.
  3. Threading capacity – the multitasking strength of a solution, spreading the collection load across multiple CPUs on a single server.
  4. Distributed collection – the ability of a flow-based solution to run a single data warehouse that takes its input from a cluster of collectors acting as a single unit, as a means to load balance.
  5. Hierarchical correlation – designed to enable parallel analytics across distributed data warehouses and aggregate their results.

The Strategic Value of Advanced Netflow for Enterprise Network Security

With thousands of devices going online for the first time each minute, and the data influx continuing unabated, it’s fair to say that we’re in the throes of an always-on culture.

As the network becomes arguably the most valuable asset of the 21st century business, IT departments will be looked at to provide not just operational functions, but, more importantly, strategic value.

Today’s network infrastructures contain hundreds of key business devices across a complex array of data centers, virtualized environments and services. This means Performance and Security Specialists are demanding far more visibility from their monitoring systems than they did only a few years ago.

The growing complexity of modern IT infrastructure is the major challenge faced by existing network monitoring (NMS) and security tools.

Expanding networks, dynamic enterprise boundaries, network virtualization, new applications and processes, growing compliance and regulatory mandates along with rising levels of sophistication in cyber-crime, malware and data breaches, are some of the major factors necessitating more granular and robust monitoring solutions.

Insight-based and data-driven monitoring systems must provide the deep visibility and early warning detection needed by Network Operations Centre (NOC) teams and Security professionals to manage networks today and to keep the organization safe.

For over two decades now, NetFlow has been a trusted technology which provides the data needed to enable the performance management of medium to large environments.

Over the years, NetFlow analysis technology has evolved alongside the networks it helps optimize to provide information-rich analyses, detailed reporting and data-driven network management insights to IT departments.

From traffic accounting, to performance management and security forensics, NetFlow brings together both high-level and detailed insights by aggregating network data and exporting it to a flow collector for analysis. Using a push-model makes NetFlow less resource-intensive than other proprietary solutions as it places very little demand on network devices for the collection and analysis of data.
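
To make the push model concrete, here is a minimal sketch (hypothetical field names, not any vendor’s actual template) of what an exported flow record looks like once it reaches a collector: a small summary of a conversation rather than the packets themselves, which is why the overhead on the exporting device stays low.

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    """Simplified flow record as a collector might store it (illustrative fields only)."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int      # e.g. 6 = TCP, 17 = UDP
    bytes: int
    packets: int
    start_ms: int      # flow start timestamp (epoch milliseconds)
    end_ms: int        # flow end timestamp (epoch milliseconds)

# One record summarises an entire conversation, e.g. a web download:
record = FlowRecord("10.1.2.3", "93.184.216.34", 51514, 443, 6,
                    bytes=1_250_000, packets=900,
                    start_ms=1_700_000_000_000, end_ms=1_700_000_012_000)
print(f"{record.src_ip} -> {record.dst_ip}:{record.dst_port} "
      f"moved {record.bytes / 1_000_000:.1f} MB in "
      f"{(record.end_ms - record.start_ms) / 1000:.0f}s")
```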

NetFlow gives NOCs the information they need for pervasive deep network visibility and flexible analytics, which substantially reduces management complexity. Performance and Security Specialists enjoy unmatched flexibility and scalability in their endeavors to keep systems safe, secure, reliable and performing at their peak.

Although the NetFlow protocol promises a great deal of detail that could be leveraged to the benefit of NOC and Security teams, many NetFlow solutions to date have failed to provide the contextual depth and flexibility required to keep up with the evolving network and related systems. Many flow solutions simply cannot scale to archive the amount of granular network traffic needed to gain the visibility required today. Due to the limited amount of usable data they can physically retain, these flow solutions are used only for basic performance traffic analysis or top-talker detection and cannot physically scale to report on the needed analytics, making them only marginally more useful than an SNMP/RMON solution.

The newest generation of NetFlow tools must combine the granular capability of a real-time forensics engine with long-term capacity planning and data mining abilities.

Modern NetFlow applications should also be able to process the ever expanding vendor specific flexible NetFlow templates which can provide unique data points not found in any other technology.

Lastly, the system needs to offer machine-learning intelligent analysis which can detect and alert on security events happening in the network before the threat reaches the point where a human would notice what has happened.
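
The exact machine-learning methods vary by product, but the underlying idea can be sketched with a simple statistical baseline: learn what “normal” volume looks like for a metric and alert when current behaviour deviates far from it. The example below is a minimal, hypothetical illustration in Python and is not how any particular vendor implements detection.

```python
from statistics import mean, stdev

def is_anomalous(history_mbps: list[float], current_mbps: float, sigma: float = 3.0) -> bool:
    """Flag the current sample if it sits more than `sigma` standard deviations
    above the learned baseline of historical samples."""
    baseline = mean(history_mbps)
    spread = stdev(history_mbps)
    return current_mbps > baseline + sigma * spread

# Hypothetical per-minute traffic history for one host (Mbps):
history = [4.8, 5.1, 5.0, 4.7, 5.3, 5.2, 4.9, 5.0, 5.4, 5.1]
print(is_anomalous(history, 5.6))   # False - within normal variation
print(is_anomalous(history, 40.0))  # True  - possible exfiltration or DDoS participation
```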

When all of the above capabilities are available and put into production, a NetFlow system becomes an irreplaceable application in an IT department’s performance and security toolbox.

How NetFlow Solves for Mandatory Data Retention Compliance

Compliance in IT is not new. Laws regulating how organizations should manage their customer data already exist (HIPAA, PCI and SCADA among them), and network transaction logging is beginning to be required of businesses. Insurance companies are gearing up to qualify businesses by the information they retain to protect their services and customer information. Government and industry regulations and enforcement are becoming increasingly stringent.

Most recently many countries have begun to implement Mandatory Data Retention laws for telecom service providers.

Governments require a mandatory data retention scheme because more and more crime is moving online from the physical world, while ISPs are keeping less data and retaining it for a shorter time. This negatively impacts the investigative capabilities of law enforcement and security agencies, which need timely information to help save lives by spotting lone-wolf terrorists early or protecting vulnerable members of society from abuse by sexual predators, ransomware and other online crimes.

Although there is no doubt as to the value of mandatory data retention schemes, they are not without justifiable privacy, human rights and cost concerns.

It takes a lot of cash, time and skills that many ISPs and companies simply cannot afford. Internet and managed service providers and large organizations must take proper precautions to remain in compliance. Heavy fines, license and certification issues and other penalties can result from non-compliance with mandatory data retention requirements.

According to the Australian Attorney-General’s Department, Australian telecommunications companies must keep a limited set of metadata for two years. Metadata is information about a communication (the who, when, where and how)—not the content or substance of a communication (the what).

A commentator from The Sydney Morning Herald noted, “…Security, intelligence and law enforcement access to metadata which overrides personal privacy is now in contention worldwide…” and speculated that, with the introduction of Australian metadata laws, “…this country’s entire communications industry will be turned into a surveillance and monitoring arm of at least 21 agencies of executive government. …”.

In Australia many smaller ISPs are fearful that failing to comply will send them out of business. Internet Australia’s Laurie Patton said, “It’s such a complicated and fundamentally flawed piece of legislation that there are hundreds of ISPs out there that are still struggling to understand what they’ve got to do”.

As for the anticipated costs, a survey sent to ISPs by telecommunications industry lobby group Communications Alliance found that  “There is a huge variance in estimates for the cost to business of implementing data retention – 58 per cent of ISPs say it will cost between $10,000 and $250,000; 24 per cent estimate it will cost over $250,000; 12 per cent think it will cost over $1,000,000; some estimates go as high as $10 million.”

An important cost to consider in compliance is the ease of data reporting when government or corporate compliance teams request information for a specific IPv4 or IPv6 address. If the data is stored in a data warehouse that is difficult to filter, the service provider may incur penalties or be seen to be non-compliant. Flexible filtering and automated reporting are therefore critical to produce the forensics required for compliance in a timely and cost-effective manner.
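
As a simple illustration of what “flexible filtering” means in practice, the sketch below filters an archive of flow records for everything a given IPv4 or IPv6 address did within a requested date range. The field names and the in-memory list are hypothetical stand-ins; a real archive would be a database or columnar store queried in the same way.

```python
from datetime import datetime
from ipaddress import ip_address

def records_for_address(archive: list[dict], target: str,
                        start: datetime, end: datetime) -> list[dict]:
    """Return all archived flow records involving `target` within the time window."""
    wanted = ip_address(target)
    return [
        r for r in archive
        if start <= r["timestamp"] <= end
        and wanted in (ip_address(r["src_ip"]), ip_address(r["dst_ip"]))
    ]

# Hypothetical archived records:
archive = [
    {"timestamp": datetime(2023, 5, 1, 10, 15), "src_ip": "203.0.113.7",
     "dst_ip": "198.51.100.20", "bytes": 48_000},
    {"timestamp": datetime(2023, 5, 2, 22, 40), "src_ip": "2001:db8::1",
     "dst_ip": "203.0.113.7", "bytes": 3_200_000},
]

hits = records_for_address(archive, "203.0.113.7",
                           datetime(2023, 5, 1), datetime(2023, 5, 31))
print(f"{len(hits)} records matched the compliance request")
```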

Although different laws govern different countries, the main requirement of mandatory data retention laws at ISPs is to maintain sufficient information at a granular level in order to assist governments in finding bad actors such as terrorists, corporate spies, ransomware operators and pedophiles. In some countries this means that telcos are required to keep data on the IP addresses users connect to for up to 10 weeks, and in others just the totals of subscriber usage for each IP used, for up to 2 years.

Although information remains local to each country and is governed by relevant privacy laws, the benefit to law enforcement in the future will eventually be the visibility to track relayed data, such as communications used by Tor browsers, onion routers and Freenet, beyond their relay and exit nodes.

There is no doubt in my mind that, with heightened states of security and increasing online crime, there is a global need for governments to intervene with online surveillance to protect children from exploitation, reduce terrorism and build defensible infrastructures, whilst at the same time implementing data retention systems with the inbuilt smarts to balance compliance and privacy rather than operating as a blanket catch-all. For the Internet communications component, a NetFlow-based solution already exists that helps ISPs comply quickly and at low cost, whilst allowing data retention rules to be implemented in a way that limits intrusion on an individual’s privacy.

NetFlow solutions are cheap to deploy and, unlike packet analyzers, do not need to be deployed at every interface. They can use the existing router, switch or firewall investment to provide continuous network monitoring across the enterprise, giving the service provider or organization powerful tools for data retention compliance.

NetFlow technology, if sufficiently scalable, granular and flexible, can deliver on the visibility, accountability and measurability required for data retention because it can include features that:

  • Supply a real-time look at network and host-based activities down to the individual user and device;
  • Increase user accountability for introducing security risks that impact the entire network;
  • Track, measure and prioritize network risks to reduce Mean Time to Know (MTTK) and Mean Time to Repair or Resolve (MTTR);
  • Deliver the data IT staff needs to engage in in-depth forensic analysis related to security events and official requests;
  • Seamlessly extend network and security monitoring to virtual environments;
  • Assist IT departments in maintaining network up-time and performance, including mission critical applications and software necessary to business process integrity;
  • Assess and enhance the efficacy of traditional security controls already in place, including firewalls and intrusion detection systems;
  • Capture and archive flows for complete data retention compliance

Compared to other analysis solutions, NetFlow can fill in the gaps where other technologies cannot deliver. A well-architected NetFlow solution can provide a comprehensive landscape of tools to help business and service providers to achieve and maintain data retention compliance for a wide range of government and industry regulations.

NetFlow for Usage-Based Billing and Peering Analysis

Usage-based billing refers to the methods of calculating and passing back the costs of running a network to the consumers of the data that flows through it. Both Internet Service Providers (ISPs) and corporations have a need for usage-based billing, with different billing models.

NetFlow is the ideal technology for usage-based billing because it allows for the capture of all transactional information pertaining to usage, and some smart NetFlow technologies already exist to assist in the counting, allocation and substantiation of data usage.

Advances in telecommunication technology have enabled ISPs to offer more convenient, streamlined billing options to customers based on bandwidth usage.

One billing model used most commonly by ISPs in the USA is known as 95th percentile. The solution measures traffic at a five-minute granularity, typically over the course of a month; the ISP then sorts the samples and disregards the highest 5% in order to establish the billable rate. This is an advantage to data consumers who have bursts of traffic, because they are not financially penalized for exceeding a traffic threshold for brief periods of time.
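
A minimal worked example of the 95th percentile calculation, assuming five-minute samples of an interface’s usage in Mbps (the figures below are invented for illustration): sort the month’s samples, discard the top 5%, and bill at the highest remaining sample.

```python
def ninety_fifth_percentile(samples_mbps: list[float]) -> float:
    """Return the 95th percentile billing rate from five-minute usage samples."""
    ordered = sorted(samples_mbps)
    # Discard the top 5% of samples; bill at the highest remaining one.
    cutoff_index = int(len(ordered) * 0.95) - 1
    return ordered[cutoff_index]

# A month of five-minute samples is ~8,640 values; a tiny invented set is used here.
samples = [12, 14, 11, 13, 15, 90, 13, 12, 14, 16,
           13, 11, 12, 14, 14, 13, 12, 15, 13, 12]
print(f"Billable rate: {ninety_fifth_percentile(samples)} Mbps")
```

In this invented data set the brief burst to 90 Mbps falls inside the discarded top 5%, so the customer is billed at 16 Mbps rather than at the peak.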

The disadvantage of the 95th percentile model is that it is not a sustainable business model as data continues to become a utility like electricity.

A second approach is a utility-based metered billing model that involves retaining a tally of all bytes consumed by a customer with some knowledge of data path to allow for premium or free traffic plans.

Metered Internet usage is used in countries like Australia and, most recently, Canada, which have moved away from the 95th percentile model nationally. This approach is also very popular in corporations whose business units share common network infrastructure and are unwilling to accept a “per user” cost, preferring a real consumption-based cost.
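
A minimal sketch of the metered approach, under the assumption that each flow record carries a customer identifier and a data-path tag (the record layout and rates below are hypothetical): every byte is tallied per customer, with path-aware rates so that “free zone” or on-net traffic can be priced differently from premium transit.

```python
from collections import defaultdict

# Hypothetical per-GB rates by data path; "free_zone" traffic is not billed.
RATES_PER_GB = {"domestic": 0.02, "international": 0.08, "free_zone": 0.0}

def metered_bill(flow_records: list[dict]) -> dict[str, float]:
    """Tally bytes per customer and price them by data path."""
    usage = defaultdict(float)
    for rec in flow_records:
        gb = rec["bytes"] / 1e9
        usage[rec["customer"]] += gb * RATES_PER_GB[rec["path"]]
    return dict(usage)

records = [
    {"customer": "acme", "path": "domestic",      "bytes": 120e9},
    {"customer": "acme", "path": "free_zone",     "bytes": 400e9},
    {"customer": "beta", "path": "international", "bytes": 75e9},
]
for customer, cost in metered_bill(records).items():
    print(f"{customer}: ${cost:.2f}")
```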

Benefits of usage-based billing are:

  • Improved transparency about cost of services;
  • Costs feed back to the originator;
  • Raised cost sensitivity;
  • Good basis for active cost management;
  • Basis for Internal and external benchmarking;
  • Clear substantiation to increase bandwidth costs;
  • Shared infrastructure costs can also be based on consumption;
  • Network performance improvements.

For corporations, usage-based billing enables the IT department to become a shared service and be viewed as a profit center rather than a cost center. It can come to be seen as a benefit and a catalyst for business growth rather than a necessary but expensive line item in the budget.

For ISPs in the USA there is no doubt that a utility-based cost-per-byte model will remain contentious as video and TV over the Internet usage increases. In other regions, new business models that package video into “free zone” services have become popular, meaning that the cost of premium content provision has fallen onto the content provider, making utility billing viable in the USA.

NetFlow tools can include methods for building billing reports and offer a variety of usage-based billing model calculations.

Some NetFlow tools even include an API that allows the chart of accounts to be retained in, and driven from, traditional accounting systems, with the NetFlow system focusing on the tallying. Grouping algorithms should be flexible within the solution to allow grouping of different variables such as interfaces, applications, Quality of Service (QoS), MAC addresses, MPLS and IP groups. For ISPs and large corporations, Autonomous System Numbers (ASN) also allow for analysis of data paths, enabling sensible negotiations with peering partners and content partners.
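
As an example of the kind of grouping this enables, the sketch below aggregates traffic by destination Autonomous System Number to see which peers or content networks carry the most bytes (the ASN values and volumes are invented for illustration); the same pattern applies to grouping by interface, QoS class, MAC address or MPLS label.

```python
from collections import Counter

# Hypothetical flow records already enriched with the destination ASN.
flows = [
    {"dst_asn": 15169, "bytes": 9_000_000_000},   # e.g. a large content network
    {"dst_asn": 32934, "bytes": 4_500_000_000},
    {"dst_asn": 15169, "bytes": 6_200_000_000},
    {"dst_asn": 64512, "bytes": 1_100_000_000},   # private-use ASN, e.g. a direct peer
]

by_asn = Counter()
for f in flows:
    by_asn[f["dst_asn"]] += f["bytes"]

for asn, total in by_asn.most_common():
    print(f"AS{asn}: {total / 1e9:.1f} GB")
```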

Look out for more discussion on peering in an upcoming blog…

How to Improve Cyber Security with Advanced Netflow Network Forensics

Most organizations today deploy network security tools that are built to perform limited prevention – traditionally “blocking and tackling” at the edge of a network using a firewall or by installing security software on every system.

This is only one third of a security solution, and has become the least effective measure.

The growing complexity of the IT infrastructure is the major challenge faced by existing network security tools. The major forces impacting current network security tools are the rising level of sophistication of cybercrimes, growing compliance and regulatory mandates, expanding virtualization of servers and the constant need for visibility compounded by ever-increasing data volumes. Larger networks involve enormous amounts of data, into which the incident teams must have a high degree of visibility for analysis and reporting purposes.

An organization’s network and security teams are faced with increasing complexities, including network convergence, increased data and flow volumes, intensifying security threats, government compliance issues, rising costs and network performance demands.

With network visibility and traceability also top priorities, companies must look to security network forensics to gain insight and uncover issues. The speed with which an organization can identify, diagnose, analyze, and respond to an incident will limit the damage and lower the cost of recovery.

Analysts are better positioned to mitigate risk to the network and its data through security focused network forensics applied at the granular level. Only with sufficient granularity and historic visibility and tools that are able to machine learn from the network Big Data can the risk of an anomaly be properly diagnosed and mitigated.

Doing so helps staff identify breaches that occur in real-time, as well as Insider threats and data leaks that take place over a prolonged period. Insider threats are one of the most difficult to detect and are missed by most security tools.

Many network and security professionals assume that they can simply analyze data captured using their standard security devices, like firewalls and intrusion detection systems. However, they quickly discover limitations: these devices are not designed to record and report on every transaction, and their lack of deep visibility, scalability and historic data retention makes old-fashioned network forensic reporting expensive and impractical.

NetFlow is an analytics software technology that enables IT departments to accurately audit network data and host-level activity. It enhances network security and performance, making it easy to identify suspicious user behaviors and protect your entire infrastructure.

A well-designed NetFlow forensic tool should include powerful features that can allow for:

  • Micro-level data recording to assist in identification of real-time breaches and data leaks;
  • Event notifications and alerts for network administrators when irregular traffic movements are detected;
  • Tools that highlight trends and baselines, so IT staff can provision services accordingly;
  • Tools that learn normal behavior, so Network Security staff can quickly detect and mitigate threats;
  • Capture highly granular traffic over time to enable deep visibility across the entire network infrastructure;
  • 24/7 automation and flexible reporting processes to deliver usable business intelligence and security forensics, especially for those analytics that can take a long time to produce.

Forensic analysts require both high-level and detailed visibility through aggregation, division and drill-down algorithms such as the following (a simple example follows the list):

  • Deviation / Outlier analysis
  • Bi-directional analysis
  • Cross section analysis
  • Top X/Y analysis
  • Dissemination analysis
  • Custom Group analysis
  • Baselining analysis
  • Percentile analysis
  • QoS analysis
  • Packet Size analysis
  • Count analysis
  • Latency and RTT analysis
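
To make one of these concrete, here is a minimal Top X/Y sketch in Python (hypothetical record fields and values): for each of the top X destination hosts by volume, list the top Y source hosts talking to it, which is a common first cut when investigating where data is flowing.

```python
from collections import defaultdict

def top_x_y(flows: list[dict], x: int = 3, y: int = 2) -> None:
    """For the top `x` destinations by bytes, print the top `y` sources talking to each."""
    per_dest = defaultdict(lambda: defaultdict(int))
    for f in flows:
        per_dest[f["dst_ip"]][f["src_ip"]] += f["bytes"]

    dest_totals = sorted(per_dest.items(),
                         key=lambda kv: sum(kv[1].values()), reverse=True)
    for dst, sources in dest_totals[:x]:
        print(f"{dst} (total {sum(sources.values()):,} bytes)")
        for src, b in sorted(sources.items(), key=lambda kv: kv[1], reverse=True)[:y]:
            print(f"  <- {src}: {b:,} bytes")

# Hypothetical flow records:
top_x_y([
    {"src_ip": "10.0.0.5", "dst_ip": "198.51.100.9", "bytes": 500_000},
    {"src_ip": "10.0.0.7", "dst_ip": "198.51.100.9", "bytes": 300_000},
    {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.4",  "bytes": 120_000},
])
```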

Further, when integrated with a visual analytics process, these analyses give the forensic professional additional insight when examining subsets of the flow data surrounding an event.

In some ways it needs to act as a log analyzer, security information and event management (SIEM) and a network behavior anomaly and threat detector all rolled into one.

The ultimate goal is to deploy a multi-faceted flow-analytics solution that can complement your business by providing extreme visibility and eliminating network blind spots, both in your physical infrastructure and in the cloud, automatically detecting and diagnosing anomalous traffic across your entire network and improving your mean time to detect and repair.

5 Benefits of NetFlow Performance Monitoring

 

NetFlow can provide the visibility that reduces the risks to business continuity associated with poor performance, downtime and security threats. As organizations continue to rely more and more on computing power to run their business, transparency of business applications across the network and interactions with external systems and users is critical and NetFlow monitoring simplifies detection, diagnosis, and resolution of network issues.

Traditional SNMP tools are too limited in their collection capability; the data extracted is very coarse and does not provide sufficient visibility. Packet capture tools can extract high granularity from network traffic, but they are dedicated to a port mirror or per-segment capture and are built for specialized short-term captures rather than long historical analyses. In addition, packet-based analytics tools traditionally do not provide the ability to perform forensic analysis of various sub-sections or aggregated views, and therefore also suffer from a lack of visibility and context.

By analyzing NetFlow data, a network engineer can discover the root cause of congestion and find out which users, applications and protocols are causing interfaces to exceed utilization. They can determine the Type or Class of Service (QoS) and group traffic with application mapping such as Network Based Application Recognition (NBAR) to identify the type of traffic, for example when peer-to-peer traffic tries to hide itself by using web services.

NetFlow allows granular and accurate traffic measurements as well as high-level aggregated traffic collection that can assist in identifying excessive bandwidth utilization or unexpected application traffic. NetFlow enables networks to perform IP traffic flow analyses without deploying external probes, making traffic analytics cheap to deploy even on large network environments.
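
As a simple illustration of turning flow data into a congestion diagnosis, the sketch below (hypothetical interface speeds, thresholds and flow fields) computes each interface’s utilization over an interval and, for any interface above a threshold, lists which applications are consuming the bandwidth.

```python
from collections import defaultdict

INTERFACE_SPEED_BPS = {"GigabitEthernet0/1": 1_000_000_000}  # hypothetical inventory
INTERVAL_SECONDS = 300
THRESHOLD = 0.80  # flag interfaces above 80% utilization

def diagnose(flows: list[dict]) -> None:
    """Report over-utilized interfaces and their top application contributors."""
    per_if = defaultdict(lambda: defaultdict(int))
    for f in flows:
        per_if[f["interface"]][f["application"]] += f["bytes"]

    for iface, apps in per_if.items():
        total_bits = sum(apps.values()) * 8
        utilization = total_bits / (INTERFACE_SPEED_BPS[iface] * INTERVAL_SECONDS)
        if utilization > THRESHOLD:
            print(f"{iface}: {utilization:.0%} utilized - top contributors:")
            for app, b in sorted(apps.items(), key=lambda kv: kv[1], reverse=True)[:3]:
                print(f"  {app}: {b * 8 / (INTERVAL_SECONDS * 1e6):.1f} Mbps")

diagnose([
    {"interface": "GigabitEthernet0/1", "application": "backup",  "bytes": 28_000_000_000},
    {"interface": "GigabitEthernet0/1", "application": "https",   "bytes": 4_000_000_000},
    {"interface": "GigabitEthernet0/1", "application": "netflix", "bytes": 2_000_000_000},
])
```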

The benefits of a properly managed network may not have been fully considered by business managers. So what are the benefits of NetFlow performance monitoring?

Avoiding Downtime

Service outages can cause simple frustrations, such as problems saving open files to a suddenly inaccessible server, or delays to business processes that impact revenue. Downtime that affects customers can result in negative customer experiences and lost revenue.

Network Speed

Slow network links cause severe user frustration that impacts productivity and causes delays.

Scalability

As the business grows, the network needs to grow with it to support more computing processes, employees and clients. As performance degrades, it is easy to set thresholds that show when, where and why the network is exceeding its capability. With historical performance data for the network, it is very easy to see when or where it is being stretched too thin or overwhelmed, and to substantiate upgrades.

Security

Security analytics is arguably the most important aspect of network management, even though it might not be thought of as a performance aspect. By monitoring NetFlow performance, it’s easy to see where the most resources are being used. Many security attacks drain resources, so if there are resource spikes in unusual areas it can point to a security flaw. With advanced NetFlow diagnostics software, these issues can be not only monitored, but also recorded and corrected.

Peering

NetFlow analytics allows organizations to embrace progressive IT trends: analyzing peering paths or virtualized paths such as Multiprotocol Label Switching (MPLS) and Virtual Local Area Networks (VLAN), making use of IPv6, Next Hops, BGP and MAC addresses, or linking internal IPs to Network Address Translated (NAT) IP addresses.

MPLS creates new challenges when it comes to network performance monitoring and security, as communication can occur directly between sites without passing through an Intrusion Detection System (IDS). Organizations have two options to address the issue: implement a sensor or probe at each site, which is costly and fraught with management hassles; or turn on NetFlow at each MPLS site and send MPLS flows to a NetFlow collector, which becomes a very cost-effective option because it makes use of existing network infrastructure.

With NetFlow, network performance issues are easily resolved because engineers are given in-depth visibility that enables troubleshooting – all the way down the network to a specific user. The solution starts by presenting the interfaces being used.

An IT engineer can click on the interfaces to determine the root cause of a usage spike, as well as the actual conversation and host pairs that are dominating bandwidth and the associated user information. Then, all it takes is a quick phone call to the end-user, instructing them to shut down resource-hogging activities, as they’re impairing network performance.

IT departments are tasked with responding to end-user complaints, which often relate to network sluggishness. However, many times an end-user’s experience has more to do with an application or server failing to respond within the expected time period that can point to a Latency, Quality of Service (QoS) or server issue. Traffic Latency can be analyzed if supported in the NetFlow version. Statistics such as Round Trip Time (RTT) and Server Response Time (SRT) can be diagnosed to identify whether a network problem is due to an application or service experiencing slow response times.

Baselines of RTT and SRT can be learned to determine whether response time is consistent or is spiking over a time period. By observing the Latency over time alarms can be generated when Latency increases above the normal threshold.
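
A minimal sketch of the baselining idea for latency (hypothetical values, using a simple exponentially weighted moving average rather than any vendor’s actual method): the baseline adapts slowly to the learned normal, and an alarm fires when the observed RTT climbs well above it.

```python
def check_latency(rtt_samples_ms: list[float], alpha: float = 0.1,
                  alarm_factor: float = 2.0) -> None:
    """Track an EWMA baseline of RTT and flag samples far above the learned normal."""
    baseline = rtt_samples_ms[0]
    for rtt in rtt_samples_ms[1:]:
        if rtt > alarm_factor * baseline:
            print(f"ALARM: RTT {rtt:.0f} ms vs baseline {baseline:.0f} ms")
        baseline = (1 - alpha) * baseline + alpha * rtt  # slowly adapt the baseline

# Hypothetical RTT measurements (ms) derived from flow latency fields:
check_latency([22, 24, 23, 25, 21, 26, 24, 95, 23, 22])
```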

TCP Congestion flags can also serve as an early warning system where Latency is unavailable.

NetFlow application performance monitoring provides the context around an event or anomaly within a host to quickly identify the root cause of issues and improve Mean Time To Know (MTTK) and Mean Time To Repair (MTTR).

Deploying NetFlow as a Countermeasure to Threats like CNB

Few would debate legendary martial artist Chuck Norris’ ability to take out any opponent with a quick combination of lightning-fast punches and kicks. Norris, after all, is legendary for his showdowns with the best of fighters and for being the last man standing in some of the most brutal and memorable fight scenes. It’s no surprise, then, that hackers named one of their most dubious botnet attacks after “tough guy” Norris; it wreaked havoc on internet routers worldwide. The “Chuck Norris” botnet, or CNB, was strategically designed to target poorly configured Linux MIPS systems and network devices such as routers, CCTV cameras, switches, Wi-Fi modems, etc. In a study on CNB, researchers at Masaryk University in the Czech Republic examined the attack’s inner workings and demonstrated how NetFlow could be employed as a countermeasure to actively detect and incapacitate the threat.

Let’s look at what gave CNB its ability to infiltrate key networking assets and how, through flow-based monitoring, proactive detection made it possible to thwart the threat and others like it.

What made the Chuck Norris attack so potentially devastating?

What made the CNB attack so menacing was its ability to access all network traffic by infiltrating routers, switches and other networking hardware. This allowed it to go undetected for long periods, whereby it was capable of spreading through networks fairly quickly. As Botnet attacks “settle in”, they start issuing commands and take control of compromised devices, known as “bots”, that act as launch pads for Denial of Service (DoS) attacks, illegal SMTP relays, theft of information, etc.

Deploying Netflow as a countermeasure to threats like CNB

In the case of the CNB attack, NetFlow collection data revealed how it infiltrated devices via TELNET and SSH ports, DNS spoofing and web browser vulnerabilities, enabling Security teams to track its distribution across servers to avoid further propagation. NetFlow’s deep visibility into network traffic gave Security teams the forensics they needed to effectively detect and incapacitate CNB.
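
As an illustration of the kind of flow-based detection used against CNB-style worms, the sketch below (hypothetical thresholds and record fields) counts how many distinct destinations each host probes on the TELNET and SSH ports within an interval; a host fanning out to many devices on port 23 or 22 is a strong scanning signal worth alerting on.

```python
from collections import defaultdict

SCAN_PORTS = {22, 23}      # SSH and TELNET
FANOUT_THRESHOLD = 50      # hypothetical: distinct destinations per interval

def detect_scanners(flows: list[dict]) -> list[str]:
    """Return source hosts contacting an unusually large number of
    distinct destinations on TELNET/SSH ports."""
    fanout = defaultdict(set)
    for f in flows:
        if f["dst_port"] in SCAN_PORTS:
            fanout[f["src_ip"]].add(f["dst_ip"])
    return [src for src, dsts in fanout.items() if len(dsts) >= FANOUT_THRESHOLD]

# Hypothetical flow records: one host probing port 23 across a /24.
flows = [{"src_ip": "192.0.2.50", "dst_ip": f"10.0.0.{i}", "dst_port": 23}
         for i in range(1, 120)]
print(detect_scanners(flows))   # ['192.0.2.50']
```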

Analysts are better positioned to mitigate risk to the network and its data through flow-based security forensics applied at the granular level coupled with dynamic behavioral and reputation feeds. Only with sufficient granularity and historic visibility can the risk of an anomaly be better diagnosed and mitigated. Doing so helps staff identify breaches that occur in real-time, as well as data leaks that take place over a prolonged period.

Flow-based monitoring solutions can collect vast amounts of security, performance and other data directly from networking infrastructure, giving Network Operations Centers (NOCs) a more comprehensive view of the environment and of events as they occur. In addition, certain flow collectors are themselves resilient against cyber attacks such as DDoS. NetFlow technology is not only lightweight in terms of resource demands on switches and routers, but can also be highly fault-tolerant, limiting exposure to flow floods through collection tuning, self-maintaining collection-tuning rules and other self-healing capabilities.

As a trusted source of deep network insights built on big data analysis capabilities, Netflow provides NOCs with an end-to-end security and performance monitoring and management solution. For more information on Netflow as a performance and security solution for large-scale environments, download our free Guide to Understanding Netflow.

Cutting-edge and innovative technologies like NetFlow Auditor deliver the deep end-to-end network visibility and security context required to help impede harmful attacks quickly.
