In our research on transportation and urban mobility at HERE, the latest developments in Big Data science certainly hold considerable promise. However, there are still some big gaps in how all that data can actually be used, which I'd like to highlight.
Multitudes of sensors and machines are collecting huge amounts of data. New communications networks, like Sigfox, are dedicated to machine communications to enable this data to move quickly and efficiently. Software services, like Hadoop, help store and process ever larger amounts of data.
The goal is to deliver real-time services that are more personalised and allow an on-demand world.
We might call this vision a data-driven world.
This data, though, is created by machines, not by humans. That has consequences for its usefulness. In the past, if you wanted to know more about driving habits, you would run a survey of drivers, asking them "When do you feel tired driving?", "What conditions make you worried about your safety?" or "How many breaks do you take during your trip?".
Now, instead, we have tools like face detection, car dongles, and body monitoring. We can also access calendars, social media interactions and eating habits.
With those, companies might create driver profiles to make driving more relaxed and safe.
However, emotions and unconscious feelings still cannot be captured as solid, quantitative data. Machines (so far) can only collect facts, and use algorithms to try to make sense of those facts.
For example, face detection could register that I am frowning. At the same time, my car dongle might report that I am driving faster than usual. Let's imagine that my car assistant (be it a robot, the car's internal system or any other interface) infers from these readings that I am grumpy. Is that useful information? Is it relevant to anything the car assistant can provide?
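To make the limitation concrete, here is a toy sketch of what such an inference could look like in code. Every name and threshold below is invented for illustration; no real automotive system works this simply, and the point is precisely that the output is a guess about facts, not a reading of feelings.

```python
# Toy sketch: rule-based "mood" inference from in-car sensor facts.
# All function names and thresholds are invented for illustration.

def infer_driver_state(frowning: bool, speed_kmh: float, usual_speed_kmh: float) -> str:
    """Guess a driver 'state' label from two observable facts."""
    # 15% above this driver's own norm counts as "speeding" here
    speeding = speed_kmh > usual_speed_kmh * 1.15
    if frowning and speeding:
        return "possibly agitated"   # the machine's guess, not a feeling
    if frowning:
        return "possibly displeased"
    if speeding:
        return "in a hurry?"
    return "no signal"

print(infer_driver_state(frowning=True, speed_kmh=120, usual_speed_kmh=95))
```

Note that the rules can only combine facts (a frown, a speed reading); whether I am actually grumpy, or simply concentrating while late for a flight, is outside anything the code can observe.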
There is a misconception that “You are your data”, but to infer real meaning from data about humans we would need a dictionary of human causes and effects.
Unfortunately, there is no such thing. One reason is that humans are irrational by nature. Our stomachs and hearts have as much power over our decisions as our brains.
The other reason is that humans are driven by needs: primary needs like sustenance and safety, and secondary needs like self-esteem and recognition.
Given this, will artificial intelligence be able to understand human needs based on the data detected by multiple sensors?
Some needs seem easier to understand than others. For example, many wearable devices promise to understand your sleep quality and to help you towards healthier habits.
By registering certain body parameters, like your heart rate or breathing, the algorithms infer whether you slept enough or whether you are unwell.
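A minimal sketch of that kind of threshold-based scoring might look like the following. The thresholds and weights are entirely invented; real wearables use proprietary and far more sophisticated models, but the principle of turning a few body parameters into a verdict is the same.

```python
# Toy sketch: scoring "sleep quality" from a few body parameters.
# All thresholds and weights are invented for illustration only.

def sleep_score(hours_asleep: float, avg_heart_rate: float, breaths_per_min: float) -> str:
    score = 0
    # duration: full marks at 7+ hours, scaled down below that
    score += 40 if hours_asleep >= 7 else int(40 * hours_asleep / 7)
    # resting heart rate: penalise each beat above 60 bpm
    score += 30 if avg_heart_rate <= 60 else max(0, 30 - int(avg_heart_rate - 60))
    # breathing: a "normal" band gets full marks, anything else half
    score += 30 if 12 <= breaths_per_min <= 16 else 15
    return "rested" if score >= 70 else "unrested"

print(sleep_score(hours_asleep=4.0, avg_heart_rate=75, breaths_per_min=18))
```

The verdict is only as good as the assumption that these parameters capture how rested you actually are, which is exactly where the argument below picks up.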
But haven't we all spent sleepless nights studying and then performed brilliantly at school the next day? Some measurements may be factually correct, but our inner self cannot yet be tracked and completely understood.
A week of laughs, of shouting, a week of doing something new. Machines are not yet able to record these events, and even if they could, would they know why a joke was so funny to me and not to somebody else? Giorgia Lupi and Stefanie Posavec, two designers based in New York and London, created a brilliant project visualising "cultural" data over one year, and made the very valid point that this material is as important as scientifically tracked parameters.
"Data can make us more human, and connect with ourselves and others at a deeper level." (Dear Data, www.dear-data.com)
On the side of science, Stuart Russell, Professor of Computer Science at the University of California, Berkeley, has been working on a framework for the study and application of artificial intelligence.
He believes that the definition of A.I. must change.
He gives a very clear example: a robot whose objective is to take care of the kids at home notices that they are hungry, that the fridge is empty and that the cat is around. Would it kill the cat for its nutritional value, or would it understand that the cat has a much higher emotional value?
Russell believes that, instead of programming robots to fulfil a particular objective, they should be programmed with no other objective than making humans happy. To achieve that, robots would have to read everything ever written by humans, analyse it to infer human values, and stick to those values in any decision they might take. Uncertainty would not be a problem, but rather a reason for robots to learn more and more, and eventually to become even smarter than human beings.
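The cat example above can be caricatured in a few lines of code. This is not Russell's actual framework, just a toy sketch of the underlying intuition: an agent that carries an estimate of how humans value each action, and penalises actions whose human value it is unsure about, will prefer a safe, boring option over a catastrophic misreading of our values. All numbers are invented.

```python
# Toy sketch of decision-making under uncertainty about human values.
# Not Russell's actual framework; estimates and weights are invented.

CANDIDATE_ACTIONS = {
    "cook the cat":    {"value_estimate": -100.0, "uncertainty": 0.1},
    "order groceries": {"value_estimate": 8.0,    "uncertainty": 0.2},
    "ask the parents": {"value_estimate": 5.0,    "uncertainty": 0.0},
}

def choose_action(actions: dict, risk_aversion: float = 10.0) -> str:
    """Pick the action with the best estimated human value, penalising uncertainty."""
    def utility(item):
        spec = item[1]
        return spec["value_estimate"] - risk_aversion * spec["uncertainty"]
    return max(actions.items(), key=utility)[0]

print(choose_action(CANDIDATE_ACTIONS))
```

The interesting part is hidden in the `value_estimate` column: Russell's proposal is that those numbers would be inferred from everything humans have written, and kept uncertain enough that the robot keeps learning rather than acting on a confident mistake.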
While Big Data is a great resource and an enabler of new services, human needs are not yet easy to detect, classify or use to make decisions. But science is progressing very fast, and an index of human values might be a first step towards making technology an ally of human evolution.