Digital deception: How bad actors and bots are infiltrating our surveys and influencing data
James Bonnamy
We’ve all filled out an online survey at some point. Whether it’s after catching a ride home after a night out or after some online shopping – everyone these days wants a five-star review.
But not everyone who completes an online survey is who they say they are.
Some of the people who complete online surveys do not have the best intentions. They might fill them out with non-genuine answers – or not even be human.
These “bad actors” usually complete an online survey to take advantage of the financial incentives on offer, or to purposefully distort the survey responses to amplify their own point of view.
A bad actor can complete an online survey themselves, or program software to complete many surveys simultaneously. In the age of artificial intelligence, this problem is only growing as such automation becomes easier.
Survey data is used to inform the interpretations and actions within a field – including in healthcare. Good-quality data is crucial in healthcare for:
- accurate diagnoses
- effective treatments
- improved patient safety
- better decision-making
- efficient care.
In nursing and midwifery, good surveys are very important. For example, last year in China, researchers at the Children’s Hospital of Nanjing Medical University surveyed more than 400 paediatric nurses to assess their research abilities. Finding that these abilities could be improved, the researchers made various recommendations – based on the nurses’ educational qualifications and professional skills – to strengthen the research capabilities of nurses who look after children in hospitals.
But with bad actors in the mix, the potential for incorrect, flawed or misrepresented data increases, and so does the likelihood of poor decisions based on that data.
Our research team, led by research fellow and PhD candidate James Bonnamy from Monash University’s School of Nursing and Midwifery, was able to quickly identify interference by bad actors in an online survey designed to collect responses only from health professions educators.
Within a few days of opening the online survey, we had received hundreds of responses. But excitement turned to dismay when we realised something unusual – many of the responses were identical.
Some were nonsensical, such as a podiatrist teaching nursing students a topic outside both podiatrists’ and nurses’ scope of practice. Many contained detailed, paragraph-length answers, yet were completed within seconds.
It was obvious to us that bots had been used to complete our online survey in an attempt to take advantage of the financial incentive that was on offer. Knowing the potential impact of such data, we immediately started exploring methods for addressing this growing and unfortunate widespread challenge.
What are bots?
Bots is short for “robots”. They’re software programs designed to perform repetitive tasks. They’re automated and, once programmed, they can run without human oversight. Bots have the capability of completing multiple surveys simultaneously in record time.
Not all bots are bad – some are designed to help humans, like those that drive online virtual chat support. On the other hand, most of us are familiar with malicious bots, such as those that harvest email addresses to send spam requesting $10,000 for a prince in dire need.
Why do bots pose a threat to research?
The way healthcare is delivered, public infrastructure is designed, and how our children are educated all depend on high-quality research. Survey responses that are generated by bots can skew research data by providing false responses and misrepresenting the population being researched.
When bots infiltrate an online survey, a single perspective can come to dominate, biasing the results towards the bot responses and away from those of real people. Because bots complete surveys far faster than humans, they can rapidly outnumber genuine responses.
This data may jeopardise important research designed to benefit real people.
How can researchers fight back?
There’s no sure-fire way to fight back. But there are some strategies that can reduce the chance of your survey being overrun by bots, or at least identify responses generated by bots. We suggest:
- Think carefully about how you design your online survey – if you provide too much context with each question, bots will use this to provide convincing-sounding answers by leveraging large language models such as ChatGPT.
- Don’t advertise your online survey through general social media, as this is the most likely way to attract the attention of bots that spend their day trawling between your research post and that of your neighbour posting videos of their dog.
- Make it difficult for bots to complete your survey by adding “challenge questions”. These take advantage of typical bot programming – where most answer each question without memory of their previous answers. When asked the same question in a different manner, bots will usually get the challenge question wrong. An example of a challenge question is asking the participant for their date of birth, and later in the survey, asking them for their age in years.
- Use the options built into your online survey platform to assist with bot detection. These tools can help to automatically flag survey responses that don’t look genuine. You’ve probably encountered examples of these tools before, such as when you are asked to click on all the squares with traffic lights in a picture before finalising your online activities.
- Check the survey response metadata – the information automatically collected with survey responses that can tell you the geo-location, network address, and how long it took to complete the survey.
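The challenge-question idea above can also be automated when cleaning your data. As a minimal sketch (the function name, one-year tolerance, and dates are illustrative assumptions, not part of our study), a script can cross-check a participant’s reported age against the date of birth they gave earlier in the survey:

```python
from datetime import date

def age_matches_dob(dob: date, reported_age: int, survey_date: date) -> bool:
    """Challenge-question check: does the reported age (in years)
    agree with the date of birth given earlier in the survey?"""
    # Age in completed years as of the survey date
    years = survey_date.year - dob.year - (
        (survey_date.month, survey_date.day) < (dob.month, dob.day)
    )
    # Allow a one-year tolerance for a birthday falling close to the survey date
    return abs(years - reported_age) <= 1

# A bot answering each question without memory of its earlier answers
# is likely to fail this consistency check.
print(age_matches_dob(date(1990, 6, 15), 34, date(2025, 1, 1)))  # consistent
print(age_matches_dob(date(1990, 6, 15), 22, date(2025, 1, 1)))  # flagged
```

Responses that fail the check aren’t necessarily bots, but they warrant a closer look alongside the other signals described above.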
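Metadata screening can likewise be scripted. The sketch below (the records, field names, and thresholds are hypothetical examples, not from our survey platform) flags responses that were completed implausibly fast or that share a network address with many others:

```python
from collections import Counter

# Hypothetical response records: (response_id, ip_address, seconds_to_complete)
responses = [
    ("r1", "198.51.100.23", 412),  # plausible human pace, unique address
    ("r2", "203.0.113.7", 9),
    ("r3", "203.0.113.7", 8),
    ("r4", "203.0.113.7", 11),
]

MIN_PLAUSIBLE_SECONDS = 60  # assumption: a human needs at least a minute
MAX_PER_ADDRESS = 2         # assumption: few genuine responses share one address

# Count how many responses arrived from each network address
ip_counts = Counter(ip for _, ip, _ in responses)

flagged = [
    rid for rid, ip, secs in responses
    if secs < MIN_PLAUSIBLE_SECONDS or ip_counts[ip] > MAX_PER_ADDRESS
]
print(flagged)  # the three fast responses from the shared address
```

Thresholds like these should be tuned to your own survey’s length and audience, and flagged responses reviewed by a human rather than deleted automatically.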
It can be difficult to resuscitate an online survey after it’s been invaded by bots, so prevention is the best medicine.
Our case study is a cautionary tale of how online bots can infiltrate survey responses, and how you can reduce the risk from the outset.
About the Authors
- James Bonnamy
Research Fellow and PhD Candidate, School of Nursing and Midwifery, Faculty of Medicine, Nursing and Health Sciences
James plays a key role in supporting the educational research portfolio at Monash University’s School of Nursing and Midwifery. He is leading several projects focused on participatory action research, collaborating closely with consumers to co-design health professions education. His work has examined the experiences of nurses who supervise students exhibiting unsafe practice or underperformance during clinical placements. Additionally, James has explored student-driven feedback processes. His research focuses on supporting health professions educators in integrating planetary health education into their curricula.
- Cliff Connell
Associate Professor, School of Nursing and Midwifery, Faculty of Medicine, Nursing and Health Sciences
Cliff has more than 30 years’ nursing experience, the past 20 years as an emergency nurse, ED clinical nurse educator, university lecturer and researcher. He is a Monash nursing and midwifery postdoctoral research fellow and BN (Hons) course coordinator. His research interests focus on patient safety, deteriorating patient outcomes, emergency care and evidence-based education.
- Michelle Lazarus
Professor, Faculty of Medicine, Nursing and Health Sciences
Michelle is the Director of the Centre for Human Anatomy Education and Curriculum Integration Network lead for the Monash Centre for Scholarship in Health Education. She leads an education research team focused on exploring the role that education has in preparing learners for their future careers. She is an award-winning educator, recognised with the Australian Award for University Teaching Excellence in 2021 and a Senior Fellowship of the Higher Education Academy (SFHEA), and is the author of The Uncertainty Effect: How to Survive and Thrive Through the Unexpected.
- Bethany Carr
Lecturer and PhD Candidate, School of Nursing and Midwifery, Faculty of Medicine, Nursing and Health Sciences
Bethany joined Monash University’s School of Nursing and Midwifery in 2016, having previously worked as a clinical midwife specialist at a large tertiary maternity hospital in Melbourne. She’s an experienced clinical midwife, and has worked as a researcher for two multi-centre trials.