How do you prevent a denial-of-service (DoS) attack launched from the internal corporate LAN by rogue IT staff?
That could be a tall order! DoS is one of the hardest types of attack to prevent, and that's when it comes from the WAN side. Someone with access and know-how on the same broadcast domain could pound any open port until the host is unresponsive, or possibly take down the entire LAN.
But first, can you explain your reasons for thinking that DoS attacks are occurring? Is it just the video outages alone?
What do the logs of the NVR, PCs, and switches say? A local DoS attack should leave plenty of evidence behind. Also, what's the motivation? A disgruntled IT guy?
You can put up a firewall and block every port except 80, but no doubt they will attack on 80, so you'll need to restrict access some other way. You could make a firewall rule based on source IP, but they could spoof past it.
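To illustrate why a source-IP firewall rule is weak against an insider, here is a minimal sketch in Python. The addresses and the packet tuples are invented for illustration; real firewalls match on the source address field of the IP header, which the sender controls and can forge.

```python
# Hypothetical sketch: a naive source-IP block rule, and why spoofing defeats it.
# Addresses below are made up for illustration.

BLOCKED_SRC = {"10.0.0.50"}  # the suspected attacker's address

def allow(packet):
    """Naive firewall rule: drop packets whose claimed source IP is blocked.
    packet is modeled as a (src_ip, dst_port) tuple."""
    src_ip, dst_port = packet
    return src_ip not in BLOCKED_SRC

# The attacker at 10.0.0.50 simply forges another host's address:
honest  = ("10.0.0.50", 80)   # dropped by the rule
spoofed = ("10.0.0.99", 80)   # same attacker, forged source -> passes

print(allow(honest), allow(spoofed))  # False True
```

The rule only sees what the packet claims about itself, which is exactly why the post above says they "could spoof past it."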
If I were you, I would look for more compelling evidence of internal sabotage. Once you have that, it may help identify the who and how. Turn on logging and real-time notification of network events.
If a staff IT person is doing this, you really need to have them physically removed from the building ASAP, and have IT execute their rogue-sysadmin contingency plan.
Knowing very little about this situation, I would look at what else is on the network and what else is happening at that time. If it occurs during a defined window, it sounds like something scheduled, such as a backup or software update. It could be that some management server is trying to update workstations and getting confused along the way.
A DoS attack is possible, but given that it happens in the middle of the night, it isn't really disrupting much, so what's the motivation? I would ask IT whether they have any scheduled jobs around that time.
A denial of service attack is a real risk when attaching an NVR to the WAN (to facilitate remote access). It's one of the reasons I'm not in favor of that design/architecture.
That being said, I agree with others here that it seems like a complicated conspiracy theory to conclude a DoS attack must be responsible for these outages without better evidence.
If there are no application logs from the NVR, I'd look to Google for help with detecting a DoS attack on Windows 7.
You might also think about putting a network device between the system and the rest of the network, not necessarily to block/prevent the attacks, but to log the activity.
A large-scale, distributed denial-of-service attack happens across the internet and works by sending so many legitimate-looking packets to the target that the networking equipment or host is saturated with traffic and cannot service any other requests. DDoS attacks use zombie botnets all over the world, so it's not possible to simply block an IP or IP range. It's not that a firewall 'blocks' the attack; rather, any mitigation has to figure out how to filter the 'bad' traffic and allow only the 'good' traffic. In the case of a professional attack this is challenging, because the network hardware needs to both handle a very large volume of traffic and know how to differentiate between the two.

The current state of the art is to sign up with a DDoS mitigation service provider, with a prearranged plan to transfer your service endpoint (via DNS) over to their network, which has the capacity to absorb the attack, apply filtering algorithms, and forward the legitimate traffic on to your service for the duration. Many of the companies that provide content distribution network services also have this capacity. A responsible service provider (e.g., any cloud service provider) needs to have a DDoS mitigation plan in place as part of its routine security assurances.
But it doesn't sound like you're talking about a DDoS attack coming in from the internet onto your NVR, but rather an internal attack, right? So there is no botnet; rather, you believe a rogue host or small cluster of hosts is flooding the NVR's NIC with traffic and taking down its services that way, correct? If so, then it really should be possible to detect this traffic pattern and identify the culprit. The network just needs to be monitored to look for these attacks. And given that it should be easy to detect, hence the skepticism that this is the most likely cause of your outages.
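The "monitor the network and look for the pattern" step can be sketched in a few lines: aggregate captured traffic by source address and flag unusually loud hosts. In practice the records would come from a SPAN-port capture (e.g., a Wireshark/tshark export) or switch flow counters; the tuples and thresholds here are invented for illustration.

```python
# Hypothetical sketch: find "top talkers" in a traffic capture.
# records: iterable of (src_ip, bytes) pairs, e.g. summarized flows.

from collections import Counter

def top_talkers(records, byte_threshold):
    """Return (src_ip, total_bytes) for hosts exceeding the threshold,
    loudest first."""
    totals = Counter()
    for src_ip, nbytes in records:
        totals[src_ip] += nbytes
    return [(ip, b) for ip, b in totals.most_common() if b > byte_threshold]

# Invented capture summary: one host is far louder than the rest.
capture = [
    ("10.1.1.7", 90_000_000),
    ("10.1.1.7", 85_000_000),
    ("10.1.2.20", 1_200_000),   # normal workstation chatter
    ("10.1.2.21", 900_000),
]
print(top_talkers(capture, byte_threshold=50_000_000))
# -> [('10.1.1.7', 175000000)]
```

A flooding host stands out immediately in this kind of summary, which is why an internal DoS should be easy to confirm or rule out once monitoring is in place (with the caveat, noted below, that the source address itself can be spoofed).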
Anything is possible! But the likelihood of your video surveillance network experiencing a DoS attack is low. As with any other issue that occurs on a network, you need to capture and analyze the packets to determine what is going on.
Some great points have been mentioned; however, not all DoS attacks are created equal. Some can be mitigated with TCP SYN checking, anti-IP-spoofing rules, or a cap on connections from a single IP, while others, such as DDoS attacks, are much harder to mitigate. If you are experiencing an attack, it is most likely a DoS attack exhausting your layer-4 UDP resources; maybe even a volumetric UDP attack causing 100% network saturation.
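The "too many connections from a single IP" mitigation mentioned above can be sketched as a per-source connection counter. Real firewalls implement this in the packet fast path; the class, names, and limits below are illustrative only.

```python
# Hypothetical sketch: refuse new connections from a source that already
# holds too many. Numbers and names are illustrative, not a real firewall API.

from collections import defaultdict

class ConnLimiter:
    def __init__(self, max_per_ip):
        self.max_per_ip = max_per_ip
        self.open_conns = defaultdict(int)  # src_ip -> open connection count

    def on_syn(self, src_ip):
        """Called when a new connection arrives.
        Return True to accept it, False to drop it."""
        if self.open_conns[src_ip] >= self.max_per_ip:
            return False
        self.open_conns[src_ip] += 1
        return True

    def on_close(self, src_ip):
        """Called when a connection from src_ip closes."""
        if self.open_conns[src_ip] > 0:
            self.open_conns[src_ip] -= 1

limiter = ConnLimiter(max_per_ip=3)
results = [limiter.on_syn("10.0.0.50") for _ in range(5)]
print(results)  # first 3 accepted, the rest dropped: [True, True, True, False, False]
```

Note that, as the next post points out, a per-IP cap is defeated by source-address spoofing, and it does nothing against a volumetric UDP flood that saturates the link before the firewall ever sees it.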
The problem with any attack on a local network is that MAC and IP spoofing can be used to hide the origin of the packets. Most video streams are UDP. It is highly unlikely that a few hundred bots have been installed on the local network to mount a distributed denial-of-service attack; however, it's not impossible. As I said, you have to analyze what is happening on the network.
It is apparent that this video surveillance system was installed without any consideration for network security. Always ensure that your video surveillance infrastructure (dedicated or not) is secure: segment the video surveillance traffic, and put some type of firewall on any physical or logical connection back to any other network.
We experienced something very similar maybe five years ago: around 1am every morning, for 30-45 minutes, we lost video from all 32 cameras. Not all cameras came back to the same IDF; six different IDFs were involved. We installed Cacti, monitored all the switches, and monitored all traffic on the port where the NVR was connected. Long story short: it was not some disgruntled employee running a DoS (it did cross our minds). Using Cacti's bandwidth graphs, we observed a spike (to ~95Mbps) on the core switch where the NVR was connected, and Wireshark was used to capture packets and determine the type and source of the traffic. It turned out to be an MS-SQL database backup. Every morning at 1am, a server on the other side of the building dumped a 30GB database file; on a 100Mbps network, using TCP with its typical overhead, that took around 45 minutes. After hooking a local USB drive up to that server, the cameras never had another issue.
Analyze the network and always secure your network!