The increasingly concerning issue of cybersecurity is likely to garner more attention now that Distil Networks has published its 2017 Bad Bot Report.
Distil Networks is an anti-bot service that has spent the last four years researching and publishing findings on internet bots, automated computer programs that carry out various tasks.
The findings documented in this fourth annual report are worrying.
The highlight of the findings is that nearly all websites with login pages (96%) experienced a cyber attack from bots.
According to the report, it is challenging to detect a cyber attack facilitated by bots since it is difficult to tell bad bots from the good ones.
This is compounded by the fact that cyber attacks are becoming more advanced and sophisticated with each passing year.
The research was conducted to gain insight into the daily cyber attacks that go undetected and wreak havoc on unsuspecting websites.
The report draws on 2016 internet traffic data collected across Distil Networks' global network.
Bad bots accounted for 40% of all web traffic in 2016.
Large websites were affected the most last year.
21.83% of large website traffic originated from bad bots, an increase of 36.43% since 2015.
These programs also misrepresent their identity in order to avoid detection.
75.9% of bad bots claimed to be popular browsers including Firefox, Chrome, Internet Explorer, and Safari.
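Because a bot can put anything it likes in its User-Agent header, a claimed browser identity proves little on its own. The following sketch illustrates one simple heuristic defenders sometimes use: a request that claims to be a popular browser but lacks the other headers real browsers routinely send is suspicious. This is an illustrative example only, not Distil Networks' detection method, and the header list is an assumption for the sketch.

```python
# Illustrative heuristic only -- NOT Distil Networks' actual detection method.
# A spoofed "browser" often omits headers that real browsers always send.

POPULAR_BROWSERS = ("Firefox", "Chrome", "MSIE", "Trident", "Safari")

# Headers a genuine browser virtually always includes with a page request
# (an assumption for this sketch; real detection uses many more signals).
EXPECTED_BROWSER_HEADERS = {"Accept", "Accept-Language", "Accept-Encoding"}

def looks_spoofed(headers: dict) -> bool:
    """Flag a request whose User-Agent claims a popular browser but whose
    header set is missing fields real browsers normally send."""
    ua = headers.get("User-Agent", "")
    claims_browser = any(name in ua for name in POPULAR_BROWSERS)
    missing = EXPECTED_BROWSER_HEADERS - headers.keys()
    return claims_browser and bool(missing)

# A bare Chrome claim with no accompanying browser headers is suspicious.
print(looks_spoofed({"User-Agent": "Mozilla/5.0 Chrome/57.0"}))  # True
```

In practice a check like this only catches careless bots; the report's Advanced Persistent Bots mimic full browser behavior, which is precisely why detection is so hard.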
The report also revealed the type of websites that are prime targets of bots.
These include websites with a login section, pricing information, proprietary content, payment processing, and web forms (discussion forums, reviews, and contacts).
Bad bots hit 97% of sites with pricing information and/or proprietary content, and affected 96% of sites with login pages.
In addition, these programs managed to bypass the login page in 90% of websites.
This data should be a cause for concern for web publishers and developers.
Bots are becoming ever more sophisticated and have proliferated extensively across websites with user accounts.
The findings reveal that 75% of bad bots were Advanced Persistent Bots (APBs).
As such, they are very difficult to detect while they carry out a cyber attack.
This has created an internet landscape that is prone to cyber attack vulnerabilities.
According to Distil Networks CEO Rami Essaid, once bots get behind a login page they have access to confidential data for scraping.
This can enable attackers to orchestrate transaction fraud.
There are three key categories of bot manifestation: white hat, gray hat, and black hat.
White-hat bots, such as search engine spiders, are relatively easy to detect.
Gray-hat bots include scrapers, bots that extract website data so that it can be understood and reused elsewhere.
They often mimic user activity, making it difficult to differentiate them from good bots or genuine human users.
97% of websites were hit by scraping bots in 2016.
Black-hat bots are the most malicious and can carry out major cyber attacks.
Such bots aim to illegally acquire information from websites, impersonate users, take over the sites and even crash the websites.
One fifth of all 2016 web traffic originated from these bots.
Another key finding of the Distil Network report is that a large percentage of bad bot traffic originated from data centers.
For bad bot traffic by ISP type, data centers accounted for 60%, residential ISPs accounted for 30.5% and mobile carriers accounted for 9.4 % of the traffic.
This shows that malicious actors have access to extensive resources through which they can launch a major cyber attack such as fraud, account takeovers, and outright data theft.
The gradual rise of globally distributed cloud networks has enabled malicious actors to develop and launch a cyber attack from bots.
It is important to note that bad bot traffic originating from mobile carriers is still far lower than that from data centers, although the figure marks a sharp increase from 2015.
Distil Networks predicts that this increase will continue in the coming years.
Distil Networks put forward measures that website owners can adopt to prevent a cyber attack by bad bots.
These include website geo-fencing, implementing a whitelist policy, imposing age limits on browser versions, and setting up bad bot filters.
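The measures above can be sketched as a simple request filter. This is a minimal illustration of the ideas, not an implementation from the report; the allowed countries, minimum browser versions, and bot signatures below are hypothetical policy values chosen for the example.

```python
# Minimal sketch of the report's suggested defenses. All policy values
# (countries, version floors, signatures) are hypothetical examples.
import re

ALLOWED_COUNTRIES = {"US", "CA", "GB"}                # geo-fencing allowlist
MIN_BROWSER_VERSION = {"Chrome": 50, "Firefox": 45}   # browser version age limits
BAD_BOT_SIGNATURES = ("scrapy", "python-requests", "phantomjs")

def should_block(country: str, user_agent: str) -> bool:
    """Return True if a request fails any of the three policy checks."""
    ua = user_agent.lower()
    # 1. Geo-fencing: reject traffic from outside the allowlist.
    if country not in ALLOWED_COUNTRIES:
        return True
    # 2. Bad bot filter: reject known bot tool signatures.
    if any(sig in ua for sig in BAD_BOT_SIGNATURES):
        return True
    # 3. Version age limit: reject browsers older than the policy floor.
    for browser, minimum in MIN_BROWSER_VERSION.items():
        match = re.search(browser.lower() + r"/(\d+)", ua)
        if match and int(match.group(1)) < minimum:
            return True
    return False

print(should_block("US", "python-requests/2.9"))       # True (bot signature)
print(should_block("US", "Mozilla/5.0 Chrome/57.0"))   # False (passes all checks)
```

Note that, as with any User-Agent-based rule, these checks only raise the bar; the Advanced Persistent Bots described in the report can present forged countries and browser strings, so such filters are one layer among several rather than a complete defense.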