In a recent Computerphile video, Lewis Stuart demonstrated how to create and deploy a simple botnet on a mock social-media site called "scroll hole".
This demonstration highlights the ease with which automated accounts can be scaled to simulate organic engagement, a process that can distort public perception and influence digital discourse.
The exercise illustrates the technical mechanics behind botnets, which are networks of compromised or automated computers controlled by a single entity. By deploying these bots on a simulated platform, Stuart showed how automated scripts can generate a high volume of activity to create an illusion of popularity or consensus.
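The mechanics Stuart demonstrated can be sketched in a few lines. This is an illustrative toy, not the code from the video: the `Bot` class, the `like`/`comment` methods, and the canned replies are all assumptions standing in for whatever API the real demo scripted against.

```python
import random

# Hypothetical sketch of a botnet-style engagement script.
# All names (Bot, like, comment, run_botnet) are illustrative.
class Bot:
    def __init__(self, username: str):
        self.username = username
        self.actions = []  # record of everything this bot "did"

    def like(self, post_id: int):
        self.actions.append(("like", post_id))

    def comment(self, post_id: int, text: str):
        self.actions.append(("comment", post_id, text))

# Low-effort canned replies are enough to simulate consensus at scale.
CANNED_REPLIES = ["So true!", "Couldn't agree more.", "This!"]

def run_botnet(num_bots: int, target_post: int) -> list:
    """Spin up num_bots fake accounts and point them all at one post."""
    bots = [Bot(f"user_{i:04d}") for i in range(num_bots)]
    for bot in bots:
        bot.like(target_post)
        bot.comment(target_post, random.choice(CANNED_REPLIES))
    return bots

bots = run_botnet(100, target_post=42)
total_actions = sum(len(b.actions) for b in bots)
print(total_actions)  # 100 bots x 2 actions each = 200
```

The point of the sketch is how little code is required: a single loop over trivially created accounts produces hundreds of coordinated interactions on one post, which is exactly the illusion of popularity the demonstration set out to expose.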
This technical vulnerability is reflected in current global internet trends. According to the Imperva Bad Bot Report 2025, more than half of all global internet traffic is now generated by bots [2]. This surge is partly attributed to the integration of artificial intelligence, which makes these automated agents harder to detect.
Not all bot traffic is malicious, but a significant share is. Data from American Banker indicates that 51% of web traffic comes from bots, and that 37% of all traffic, the bulk of that bot activity, is categorized as "bad bots" [1]. These malicious bots often target high-value sectors, with banks among the primary targets for such activity.
The "scroll hole" simulation serves as a microcosm for these larger trends. It shows that the barrier to entry for creating bot-driven influence operations is low, while the tools for detection struggle to keep pace with the scale of automation.
The convergence of AI-driven bot creation and the high volume of existing automated traffic suggests that digital authenticity is becoming harder to verify. As bot traffic exceeds 50% of global activity, the ability to distinguish between human opinion and algorithmic manipulation becomes a critical challenge for platform security and democratic discourse.