Written by Riya Wadhwani, Ph.D. Student at the Indian Institute of Management Udaipur, India
Picture yourself strolling through a neighborhood park, only to find yourself paparazzied by the relentless gaze of smart cameras meticulously tracking your every move, your route, and even your biometric details. This intrusive surveillance can encroach upon your personal life, as the data may find its way onto social media platforms. Despite these privacy concerns, the functional benefits of such AI applications cannot be overlooked: they play a crucial role in monitoring air quality, optimizing traffic flow to reduce congestion and accidents, and aiding the detection of criminal activity. The question is, how would a consumer weigh these perceived benefits against the potential costs of sacrificing privacy and autonomy in public spaces?
However, the equation shifts in commercial contexts, where individuals are often willing to provide biometric data such as facial or fingerprint scans for smartphone authentication purposes. Why is there increased willingness to share personal data in commercial contexts? This willingness to engage with AI in different contexts underscores the complex interplay between benefits and costs associated with convenience, privacy, and perceived utility.
Prof. Matilda Dorotic (Associate Professor in Marketing, BI Norwegian Business School), Prof. Emanuela Stagno (Lecturer in Marketing, University of Sussex Business School), and Prof. Luk Warlop (Professor of Marketing, BI Norwegian Business School) delve into these tensions and trade-offs in their paper “AI on the street: Context-dependent responses to artificial intelligence” (International Journal of Research in Marketing, Volume 41, Issue 1, March 2024), highlighting the nuanced perspectives individuals hold when adopting AI in public settings and how the way AI is implemented influences their evaluations of its benefits and costs.
Will You Hit 'AGREE' to Enable AI on Your Phone?
"Consider the role of AI in healthcare. On one hand, it holds tremendous potential for making diagnoses faster and more accurate, something that excites computer scientists. However, this very same solution has the potential to exacerbate existing inequalities within healthcare systems. While these disparities are not caused by AI itself, they are already inherent within the system. It is crucial to understand AI holistically in terms of how all these social issues are spilling over not only in the creation of AI but towards its application, evaluation, and its overall impact on the society."
-Matilda Dorotic
The idea originated in a collaboration between law enforcement, municipalities, and computer science academics in the AI4Citizens project, funded by the Norwegian Research Council. Through this partnership, practitioners expressed the challenge of balancing citizen well-being with societal benefits in public AI applications.
There's abundant research on AI, yet it often remains compartmentalized and lacks holistic understanding. Computer science primarily focuses on technical solutions and their assumed benefits, such as cost reduction and faster data analysis. However, this perspective is disconnected from consumer evaluation. Conversely, social fields like psychology and marketing analyze consumer reactions but lack integration with technical solution development. This divide makes it challenging to predict the full societal impact of AI introductions, particularly in public applications. Such applications may prioritize technocratic solutions without considering broader implications, leaving them vulnerable to unforeseen consequences.
This led Matilda to investigate how perceptions of AI differ between commercial and public contexts, emphasizing the need to assess benefits and costs within each context.
Public AI >> Commercial AI: Should We Be Concerned?
"Interestingly, the acceptability of smart surveillance cameras was stronger for public applications (e.g., on a street, underground or public transportation stations) than for commercial contexts (e.g. to monitor people in stores)."
-Matilda Dorotic
The findings reveal a stark contrast in people's perceptions of surveillance technologies across different contexts. For example, while concerns about privacy and loss of control are significant in surveillance AI, they're often overlooked in commercial and public infrastructure applications. In surveillance AI, fears and privacy concerns outweigh perceived benefits, whereas in commercial and infrastructure AI, benefits are prioritized. Public infrastructure-directed AI, such as traffic control and air quality monitoring, is perceived as least intrusive (individuals feel the least 'exploited' and most 'served'), with lower personal costs compared to commercial applications like chatbots. However, it's crucial to recognize that public AI, while offering benefits like energy optimization and crime tracking, also poses privacy risks just like surveillance cameras do.
"It was surprising to find that for high-risk applications, in European countries where most of the respondents were recruited, people are more willing to trust the government (public entities) than commercial entities. Believe me, this has real consequences!"
-Matilda Dorotic
These findings are a clear call for policymakers to consider these context dependencies and to ensure anonymization and transparency. Getting people to adopt public AI is tough because they don't fully realize its benefits for society, prefer personal gains, and worry a great deal about privacy.
These findings also empower marketers and AI developers to create applications that truly connect with users, making the benefits visible while keeping the perceived costs low. And it's not just for businesses: policymakers can use these insights to figure out the best ways to deploy AI in public places, weighing concerns like privacy against the value it delivers to society.
So, how likely are YOU to enable AI on your phone?
Read the Paper
Believe me, there is a LOT more to what you just read here. Click to read more.
Want to cite the paper?
Dorotic, M., Stagno, E., & Warlop, L. (2024). AI on the street: Context-dependent responses to artificial intelligence. International Journal of Research in Marketing, 41(1), 113-137.
Meet Matilda Dorotic
Associate Professor in Marketing, BI Norwegian Business School
Visiting Researcher at Harvard University and MIT
"I am one of those coauthors prone to opening up many new Pandora’s boxes, so I am glad that I have had coauthors who have helped me to come back to focus on the selected issues we want to look at. Emanuela is a great implementor of ideas and Luk’s strengths is in application of theories. I feel that as a team we have complementary skills."
-Matilda Dorotic
A ritual/practice/exercise that you can’t miss or start your day without?
Making breakfast for my children 😊
If you would not be a marketing researcher, what would you be?
A scientist in biology or genetics – I would be a scientist – it is in my deepest fiber. Alternatively, as a child I wanted to be a missionary nurse at Mother Teresa’s hospitals; she was my inspiration in my early days.
If you could retain only one concept in marketing, what would it be?
Customer centricity!
Which researcher, from any field, would you like to sit down to lunch with, and what would you say to him/her?
I have met so many and have been fortunate to have lunch with many of the researchers I admire. What I would do is have lunch with numerous scientists from different fields on one side and regulators on the other, to see how true interdisciplinarity would work. What a crazy lunch that would be.
This article was written by
Riya Wadhwani
Ph.D. student at the Indian Institute of Management, Udaipur (Rajasthan, India)