AI in research
5 Sept 2023
Ethical Integration of AI in ‘Don’t Ask’ Market Research
By Rachel Moreau
The landscape of market research is undergoing rapid transformation, driven by the integration of artificial intelligence (AI). In this article, we delve into the valuable insights offered by The Association of Survey Computing (ASC) conference "Don't Ask!", held on 10 May 2018 in London. With its intriguing subtitle, "Gaining insights without questions", this illuminating seminar offered a glimpse into the future of market research and had three primary focuses: the regulation of ethics in data collection, an exploration of new applications of AI, and the ethical use of such innovations. The insights garnered from the seminar not only provide guidance for market researchers and anyone seeking to harness AI's power, but also underscore the paramount importance of ethical considerations in this evolving landscape.
Exploring Innovation with AI
The “Don’t Ask!” seminar brought together market research and technology professionals, each sharing their insights on the transformative role of AI-driven applications in data collection and interpretation. A number of these experts presented exciting new technologies for “gaining insights without questions”: tools which enable market researchers to uncover implicit and unspoken insights from their qualitative data.
Matt Celuszak from CrowdEmotion unveiled a fascinating array of applications at the cutting edge of emotion analysis, from eye tracking and facial coding to deciphering body language and delving into neuroscience. Powered by webcams and sophisticated software, these tools allow researchers to revolutionize passive data collection, detecting unspoken meaning from respondents in qualitative studies.
One such technology, presented by Pedro Almeida of MindProber, was a set of discreet sensors able to capture minute emotional responses from participants. Using facial recognition and physiological gauges, the sensors picked up a treasure trove of data invisible to the naked eye. The system could identify unspoken emotions, ambivalence or uncertainty in a speaker’s voice, and could rank given responses by their perceived veracity.
Meanwhile, Maarten Bossuyt of MyForce discussed AI-powered voice recognition, and how new technologies can pick up on subtleties of tone and intonation with startling accuracy. He gave the example of voiceprints replacing passwords as evidence of how subtle, seemingly unnoticeable indicators in speech will increasingly become significant markers of identity and expression, recognised and analysed by technology.
In both facial and vocal recognition, machine-derived insights can be fed into complex neural networks to identify trends or predict future responses. In the case of Almeida’s sensors, he explained how the system was able to predict future reactions to advertising, and even soap operas, based on the insights it uncovered during the initial round of fieldwork.
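The predictive step described here can be pictured with a toy model. The sketch below trains a nearest-centroid classifier on invented emotion-signal features ("facial positivity" and "vocal energy" are made-up names, not anything MindProber or CrowdEmotion actually measures) and predicts a new participant's reaction; real systems use far richer signals and trained neural networks.

```python
# Toy sketch: predicting a reaction to new material from emotion-signal
# features gathered during fieldwork. All feature names and numbers are
# invented for illustration only.

# Each training row: (facial_positivity, vocal_energy) -> observed reaction
fieldwork = [
    ((0.9, 0.8), "engaged"),
    ((0.8, 0.7), "engaged"),
    ((0.2, 0.3), "disengaged"),
    ((0.1, 0.2), "disengaged"),
]

def centroid(rows):
    # Average feature vector of a group of observations.
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def predict(features, training):
    # Nearest-centroid: assign the label whose average profile is closest.
    by_label = {}
    for feats, label in training:
        by_label.setdefault(label, []).append(feats)
    centroids = {label: centroid(rows) for label, rows in by_label.items()}
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

print(predict((0.85, 0.75), fieldwork))  # profile close to the "engaged" group
```

The point of the sketch is only the shape of the pipeline: signals captured passively during fieldwork become features, and features feed a model that generalises to stimuli the participant never saw.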
Generative AI tooling has only amplified the potential for “gaining insights without questions” through predicted responses. AI tools such as CoLoop synthesise participants’ responses, allowing researchers to ask new questions of the data at any point after fieldwork is completed. Whilst the input data remains fixed, the ability of AI to analyse, synthesise, and predict from this data ensures endless possibilities in the questions researchers can pose and the answers they can produce.
As well as unlocking potential for the ways in which qualitative data is collected (through visual or vocal sensors, for example), AI tools such as CoLoop allow researchers to ask questions they may have missed during interviews and receive generative answers based on the data they were able to collect. By synthesising responses from participants, CoLoop can identify themes, create personas, and reveal implicit answers to questions that arise after fieldwork and moderation are complete.
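The idea of querying a fixed body of fieldwork data after the fact can be sketched very simply. The snippet below stands in for the synthesis step with plain keyword matching over invented transcript snippets; this is not how CoLoop works internally (tools like it use large language models), just an illustration of asking a question the moderator never posed.

```python
# Toy sketch: asking a new question of fixed fieldwork data after the
# fact. The transcripts below are invented, and keyword matching is a
# crude stand-in for AI-driven synthesis.

transcripts = {
    "p1": "I liked the packaging but the price felt too high for me.",
    "p2": "The price was fine; what mattered was how easy it was to use.",
    "p3": "Honestly the packaging felt wasteful, too much plastic.",
}

def ask(question_keywords, data):
    # Return snippets mentioning any keyword.
    hits = {}
    for pid, text in data.items():
        if any(kw in text.lower() for kw in question_keywords):
            hits[pid] = text
    return hits

# A question the moderator never asked during fieldwork:
print(ask(["packaging"], transcripts))  # p1 and p3 mention packaging
```

The input data stays fixed, exactly as the article notes; what changes is the set of questions put to it afterwards.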
Each of the technologies discussed so far represents an exciting avenue of exploration for market research. However, despite the seminar’s title, “Don’t Ask!”, many ethical questions lingered regarding the practicalities of extracting implicit, unspoken insights from participants during qualitative studies…
The Ethics of ‘Don’t Ask’ in an AI World
The ethical implications of ‘Don’t Ask’ technology were a popular topic of debate throughout the seminar. Adam Phillips of Real Research highlighted how swiftly consumer trust in market research technology can be eroded. He gave the example of the notorious Cambridge Analytica incident, noting the widespread consequences of mishandling personal data through unauthorized harvesting.
The Cambridge Analytica case study underscored the urgency of comprehensive data protection measures like GDPR, which aims to safeguard individuals' personal data. The challenge of obtaining consent, especially in the realm of passive data collection, has only gained prominence.
For example, whilst respondents may consent to the information they explicitly provide to a researcher, where can the line be drawn with implicit, hidden, unspoken, or unconscious reactions? Extracting emotions from facial expressions, voice tonality, or physiological signals raises questions about the nature of consent during qualitative studies. The potential arises for individuals' emotional states to be interpreted and analyzed without their knowledge or permission, infringing upon their privacy rights.
It is one thing for these insights to contribute to the analysis of a specific study, but ethical uncertainty has been amplified by the increasing ability of AI to use them to predict or generate responses in future studies. In the realm of marketing, consider an AI-driven model tasked with predicting consumer emotions to tailor advertisements. A participant’s responses may be used by a brand to predict responses to future marketing campaigns. Alternatively, a participant’s responses could be used to build a custom algorithm representing different segments of the brand’s target audience.
As well as raising questions of consent and privacy, modeling responses carries potential downsides for the brand and its wider market, too. If such a model is not trained on a diverse range of emotions from various demographic groups, it could inadvertently overlook or misunderstand certain emotional responses. Consequently, the advertisements it generates might end up insensitive or inappropriate, failing to connect with a wider audience. This mismatch could lead to reduced engagement and harm the brand's reputation.
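The bias mechanism described here is easy to demonstrate with numbers. In the sketch below, a positivity threshold is calibrated only on one group's expressiveness scores, then applied to a group whose baseline expressiveness differs; the second group's genuinely positive responses are all misread. Every figure is invented purely to illustrate the failure mode.

```python
# Toy sketch of training-data bias: a threshold learned from one group's
# expressiveness misreads a group with a different baseline. All numbers
# are invented for illustration.

# Expressiveness scores when participants reported feeling "positive":
group_a = [0.8, 0.9, 0.85, 0.75]   # well represented in the training data
group_b = [0.4, 0.5, 0.45, 0.35]   # absent from the training data

# Threshold calibrated only on group A: a fixed margin below its average.
threshold = sum(group_a) / len(group_a) - 0.2

def classify(score):
    return "positive" if score >= threshold else "neutral/negative"

# Group A is classified correctly...
print([classify(s) for s in group_a])   # all "positive"
# ...but group B's genuinely positive responses are misread.
print([classify(s) for s in group_b])   # all "neutral/negative"
```

Nothing in the model is malicious; the error comes entirely from who was, and was not, in the training sample, which is exactly the risk the seminar speakers raised.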
Evidently, applying a ‘Don’t Ask’ approach to future market research practices is not without risk. To harness the potential of AI-driven analysis effectively, it's crucial to apply ethical practices, ensuring fairness, openness, and privacy protections to prevent such pitfalls.
Where Do We Go from Here? Ethical Solutions in the Realm of ‘Don’t Ask’…
In the ‘Don’t Ask’ seminar, a number of potential approaches to ensuring ethical standards emerged.
One such approach, presented by Adam Phillips, was the concept of "legitimate interest". In essence, legitimate interest refers to a valid reason for organizations to collect and use personal information. A critical aspect of legitimate interest is that it must strike a balance between organizational interests and individuals' rights. This balance hinges on the reasonable expectations people have regarding data use, based on their relationship with the entity collecting it. Participants should be made explicitly aware of the data that will be collected from them, both explicit and implicit, and how it may be used in future research. Only then should ‘Don’t Ask’ analysis take place.
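One practical way to honour this principle is to record, per participant, which data was explicitly provided, which was passively collected, and which future uses were agreed to, and to check that record before any 'Don't Ask' analysis runs. The sketch below is a hypothetical data structure; the field names are invented, and a real implementation would follow GDPR guidance and legal review.

```python
# Illustrative sketch of a consent record separating explicitly provided
# data from passively collected (implicit) signals, with the uses the
# participant agreed to. Field names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    participant_id: str
    explicit_data: list    # e.g. interview answers the participant gave
    implicit_data: list    # e.g. facial coding, voice tonality, biometrics
    permitted_uses: set    # uses the participant explicitly agreed to

    def allows(self, use: str) -> bool:
        # Analysis should only proceed for uses the participant consented to.
        return use in self.permitted_uses

record = ConsentRecord(
    participant_id="p-001",
    explicit_data=["interview transcript"],
    implicit_data=["facial coding", "voice tonality"],
    permitted_uses={"this study's analysis"},
)

# A future use that was never consented to is refused:
print(record.allows("predictive modelling for future campaigns"))  # False
```

Making the permitted uses an explicit, checkable set mirrors the balancing test at the heart of legitimate interest: the organisation's interest proceeds only where the participant's reasonable expectations cover it.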
Another critical discussion concerned the role of human oversight. Speakers implored market researchers to ensure AI-generated models undergo careful scrutiny throughout the analysis process. In particular, they suggested that researchers should ask whether AI-generated discoveries resonate with their own experiences and analyses from moderating the projects. Their conclusion was that human-derived insights should not be disregarded or ignored when working with AI-generated outputs.
Throughout the seminar, consent and human interaction continuously arose as the key forces for maintaining ethical standards in market research. Those involved unanimously agreed that as the new frontier of AI-driven data collection and interpretation moves forward, the need to develop stringent ethical guidelines, thorough bias mitigation strategies, and transparent accountability mechanisms becomes all the more vital.
Conclusion
In summary, the seminar’s main talking points underscore the symbiotic relationship between AI-driven and human-conducted research. The integration of AI and digital tools brings novel opportunities for data collection, yet traditional data interpretation remains instrumental in addressing knowledge gaps. For CEOs and market researchers, this seminar provides a treasure trove of insights for navigating the ethical integration of AI. Discussions of data privacy, consent, and the role of human oversight form guiding principles for ethical practice. By immersing themselves in the diverse viewpoints and examples presented, researchers can confidently embrace AI's possibilities while adhering to the highest ethical standards.