Companies are collecting our biometric data. We need new safeguards

Imagine walking through a busy train station. You're in a hurry, weaving through the crowd, unaware that cameras are not only watching you, but also recognizing you.

Today, our biometric data is valuable to companies for security, for improving the customer experience, and for increasing their own efficiency.

Biometric data is a set of unique physical or behavioral characteristics, and it is part of our everyday lives. Facial recognition is the most widely used form.

Facial recognition technology comes from a branch of artificial intelligence called computer vision and is comparable to giving computers the ability to see. The technology scans images or videos from devices such as CCTV cameras and recognizes faces.

The system typically detects and maps 68 specific points known as facial landmarks. Together, these create a digital "fingerprint" of your face, allowing the system to recognize you in real time.

These landmark points include the corners of the eyes, the tip of the nose and the edges of the lips. They are used to build a mathematical representation of the face without storing the full image, which improves both privacy and efficiency.
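The matching step behind this can be sketched in a few lines: the landmark points are reduced to a numerical vector (an "embedding"), and two faces are treated as the same person when their vectors are close enough. The embeddings and threshold below are hypothetical, and real systems use much higher-dimensional vectors; this is only a sketch of the principle.

```python
import math

def euclidean_distance(a, b):
    """Distance between two face embeddings (lists of floats)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(known, candidate, threshold=0.6):
    """Treat two embeddings as the same face if they are close enough."""
    return euclidean_distance(known, candidate) <= threshold

# Hypothetical 4-dimensional embeddings (real systems use 128+ dimensions).
enrolled = [0.11, 0.52, 0.33, 0.91]
same_person = [0.13, 0.50, 0.35, 0.89]
stranger = [0.70, 0.10, 0.80, 0.25]

print(is_match(enrolled, same_person))  # True
print(is_match(enrolled, stranger))     # False
```

Note that the system never needs the original photograph to perform this comparison, only the stored vector, which is why such representations are described as more privacy-friendly than raw images.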

From supermarkets to parking lots to train stations, surveillance cameras can be seen everywhere, silently doing their job. But what exactly is their job today?

Companies may be able to justify collecting biometric data, but with power comes responsibility, and the use of facial recognition raises significant concerns about transparency, ethics and privacy.

If even police use of facial recognition raises ethical questions, the business case is harder to justify, especially since little is known about how companies store, manage and use the data.

Collecting and storing biometric data without consent may violate our rights, including protection from surveillance and the storage of personal images.

For companies, balancing security, efficiency and data protection is a complex ethical decision.

As consumers, we are often hesitant to share our personal information, but facial recognition poses more serious risks, such as deepfakes and other identity fraud threats.

Take, for example, the recent revelation that Network Rail secretly monitored thousands of passengers using Amazon's AI software. This surveillance shines a critical light on an issue: the need for transparency and strict regulation even when a company is watching us to improve its services. A Network Rail spokesperson said: “When we use technology, we work with the police and security services to ensure we take appropriate action and we always comply with applicable laws on the use of surveillance technology.”

One of the biggest challenges is the question of consent. How will the public ever give informed consent when they are constantly monitored by cameras and do not know who is storing and using their biometric data?

This fundamental problem underscores the difficulty of addressing privacy concerns. Companies face the daunting task of obtaining clear, informed consent from people who may not even know they are being watched.

Without transparent practices and explicit consent mechanisms, it is almost impossible to ensure that the public is truly informed about and consents to the use of their biometric data.

Think about your digital security. If your password is stolen, you can change it. If your credit card is compromised, you can have it blocked. But your face? That's forever. Biometric data is incredibly sensitive because it can't be changed once it's compromised. That makes it a high-stakes game when it comes to security.

If a database is attacked, hackers could misuse that data for identity theft, fraud, or even harassment.

Another issue is algorithmic bias and discrimination. When using data to make decisions, how can companies ensure that diverse and sufficient data is included to train the algorithm?

Algorithms should include a diverse set of facial recognition data.

Companies could use biometric data for authentication, personalized marketing, employee monitoring and access control. But there is a significant risk of gender and racial bias if an algorithm is trained primarily on data from a homogenous group, such as white men.

Companies must also take care that algorithmic bias does not persist, as it can entrench social inequalities.
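One basic safeguard is to measure a system's error rate separately for each demographic group rather than relying on a single aggregate accuracy figure, which can hide large gaps. A minimal sketch of such a check, using hypothetical group labels and predictions:

```python
from collections import defaultdict

def error_rate_by_group(records):
    """records: list of (group, true_identity, predicted_identity) tuples.
    Returns the misidentification rate for each demographic group."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation data: overall accuracy here is 5/6,
# but it conceals a large gap between the two groups.
results = [
    ("group_a", "alice", "alice"), ("group_a", "bob", "bob"),
    ("group_a", "carol", "carol"), ("group_a", "dan", "dan"),
    ("group_b", "erin", "frank"), ("group_b", "frank", "frank"),
]
print(error_rate_by_group(results))  # {'group_a': 0.0, 'group_b': 0.5}
```

Audits of this kind only reveal a disparity; fixing it usually means collecting more diverse training data, which is exactly the obligation the article describes.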

Legislation and awareness

As facial recognition becomes more widely used, strong legislation is urgently needed. Laws must require clear consent before collecting a person's biometric data. They should also set strict standards for storing and securing this data to prevent breaches.

It is equally important that the public is made aware of the issue. While people are becoming more aware of data privacy, facial recognition often goes unnoticed. It is invisible in our everyday lives and many are unaware of the risks and ethical issues. Educating the public is crucial.

A good start would be to incorporate the principles of responsible AI into the use of facial recognition technology. Responsible AI values fairness, accountability, transparency and ethics. This means that AI systems, including facial recognition, should be designed and deployed in a way that respects human rights, privacy and human dignity.

However, companies are not necessarily likely to prioritize these principles if they are not held accountable by regulators or the public.

Transparency is a cornerstone of responsible AI. If organizations that use facial recognition keep their practices secret, we cannot trust them with our biometric data.

Companies that hold only your personal data can already be very powerful when it comes to manipulative marketing: a single "like" is enough to build tailored campaigns that target you with precision.

Meanwhile, political parties like the PTI in Pakistan are using Vision AI technology to enable Chairman Imran Khan to campaign despite being in prison.

AI allowed Imran Khan to address his supporters from prison.

Visual data is particularly sensitive compared with non-visual data because it provides richer, more personal and more immediate insight into human behavior and identity.

That's why its increasing use by companies raises so many concerns about privacy and consent. Unless the public knows the extent to which their visual data is being collected and used, their information is vulnerable to misuse or exploitation.

Kamran Mahroof, Associate Professor, Supply Chain Analytics, University of Bradford; Amizan Omar, Associate Professor of Strategic Management, University of Bradford, and Irfan Mehmood, Associate Professor of Business Analytics, University of Bradford

This article is republished from The Conversation under a Creative Commons license. Read the original article.
