Yet at the same time, the way these technologies are developed must reflect our rich and diverse cultural heritage and humanistic world view. Only then will they power applications that are innovative, socially minded and user-centric.
Data, connectivity and new technologies can help people gain access to education, work, health services and mobility. They can also help tackle issues such as inequality, poverty and environmental threats. New technologies will transform society in a vastly positive way, but only if they are applied carefully, and with consideration of their wider impacts. Many data-driven projects pursue explicitly positive social goals, such as Fluttr, a digital hiring platform that removes bias from the hiring process, and Accenture’s “Million Meals” initiative.
Accenture’s “Million Meals” project taps into a mixture of IoT, blockchain and AI to help provide millions of meals to Indian schoolchildren. Blockchain gathers feedback, IoT sensors measure food delivery and AI predicts what food is needed tomorrow.
As the possibilities of data-driven innovation expand, so too does awareness of the negative implications. Human bias present in training data can easily be amplified to create machine learning models that are inadvertently discriminatory. The implications may be especially critical in high-stakes scenarios, such as immigration and law enforcement, but also in situations such as personnel hiring and financial-liability assessments. Similar bias can also stem from homogeneous training data: a lack of diversity in training data, for example, has been suggested as the cause of the Google Photos AI mislabeling Black people as gorillas.
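The bias-amplification mechanism can be illustrated with a minimal sketch. The scenario, data and "model" below are entirely hypothetical: two groups of candidates are equally qualified, but historical hiring decisions favored one group, so a model that simply learns from those historical labels reproduces the disparity.

```python
# Toy illustration (hypothetical data): a model trained on biased
# historical labels inherits that bias, even when the underlying
# qualification rates are identical across groups.
import random

random.seed(0)

def make_history(n=1000):
    """Generate synthetic hiring records: qualification rates are the
    same for both groups, but past decisions favored group 'A'."""
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        qualified = random.random() < 0.5  # same rate for both groups
        # Biased historical decision: qualified 'A' candidates were
        # hired far more often than equally qualified 'B' candidates.
        hire_prob = 0.9 if group == "A" else 0.4
        hired = qualified and (random.random() < hire_prob)
        records.append((group, qualified, hired))
    return records

def learned_hire_rate(records, group):
    """A naive 'model' that learns the historical hire rate per group
    among qualified candidates -- the past bias is copied verbatim."""
    qualified = [r for r in records if r[0] == group and r[1]]
    return sum(1 for r in qualified if r[2]) / len(qualified)

history = make_history()
print("Group A:", round(learned_hire_rate(history, "A"), 2))
print("Group B:", round(learned_hire_rate(history, "B"), 2))
```

Running this shows the model predicting a much higher hire rate for group A despite identical qualifications. Real systems are more complex, but the principle is the same: a model optimized to match historical outcomes will faithfully reproduce the prejudices embedded in them.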
We can think of AI as a mirror – something that reflects not only the data it’s trained with, but also the typical cultural homogeneity of those who develop it. Neural networks and machine learning may be great at uncovering and amplifying patterns, but they often do so without consideration of whether those patterns have positive or negative impacts on humankind. By mirroring our society, AI forces us to face up to our biases and flaws. We must therefore critically question, reevaluate and redefine our values and beliefs to ensure the impacts of new products and services remain positive. A greater emphasis is already being placed on ethics and data standards, and many frameworks are being developed to address these challenges.
Fast.ai’s mission is to make deep learning easier, and to involve more people from all backgrounds during all stages of AI development.
To help technology deliver a more positive future, we need to embrace a common, global ethics framework, applicable both to data collection (enforcing privacy regulations and anonymization) and to training-data choices. Increasing awareness of potential risks should also mean that greater emphasis is placed on developing solutions that are ethically and morally sound.
OrCam’s MyEye is an assistive device for the blind and visually impaired. It uses AI to read text, recognize faces and even identify products, transforming the lives of the visually impaired. © OrCam
Efforts to make technology fairer and more ethical must be matched with a push towards diversity and inclusivity in the real world. Debate also continues as to what exactly constitutes “ethical and responsible” AI development – there is no simple answer to this question, and views will vary significantly across different cultures and countries. The question therefore remains: can technology be a unifying force for good, one that bridges divided societies and nationalities and unites humanity?
The full report is available for download.