The Pew Research Center published a survey this month about the rise of Facebook and Twitter as news sources for many Americans: 63% of those platforms' users now get news there.
This raises questions about the artificial intelligence systems these platforms use to give their users the information they want.
At the moment, the intelligence system is fairly similar across search, news readers and social networks. The user interacts with the system; the system logs certain behavioural tendencies; the system builds up a picture of those tendencies, which makes it easier for the user to exhibit the same tendencies again in the future.
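To make that loop concrete, here is a minimal sketch in Python of how such an engagement-driven filter behaves. All of the names and structure here are hypothetical, not any platform's actual code; the point is only that every click raises the weight of similar items, so the feed drifts towards whatever you already tend to click.

```python
from collections import defaultdict, namedtuple

Item = namedtuple("Item", ["headline", "topic"])

class EngagementFeed:
    """Toy engagement-driven filter: every interaction with a topic
    raises the rank of future items on that topic."""

    def __init__(self):
        self.affinity = defaultdict(float)  # topic -> learned weight

    def log_click(self, item):
        # The system logs a behavioural tendency...
        self.affinity[item.topic] += 1.0

    def rank(self, items):
        # ...and makes that tendency easier to exhibit again.
        return sorted(items, key=lambda i: self.affinity[i.topic], reverse=True)
```

Note that nothing in this loop ever asks the user whether the logged tendency reflects what they actually want to see, which is the problem the rest of this piece is concerned with.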
Brands then market their services as a method of mapping your 'likes' and 'dislikes', presenting this as a good basis for filtering the information you see online. You can read as much in Facebook's blog about its news feed.
But this is wrong. Behaving in a certain way does not mean that you condone that behaviour. It certainly does not mean you want an intelligence system to remember those tendencies and make it easier to exhibit them.
In fact, behavioural tendencies (or thoughts) are used privately to test-drive how you might act in a given situation. We examine many different scenarios before settling on a course of action. We filter our thoughts to arrive at actions, which is why we should be judged on our actions and not on our thoughts.
As a result, the current intelligence systems are logging behavioural tendencies that we may disapprove of. That leaves us without control.
Because we are not in charge of this filtering process, it can be manipulated. In Google search, companies manipulate the results by adhering to SEO best practice. Facebook, meanwhile, has experimented with changing users' moods through minor tweaks to its news feed algorithm.
What are these algorithms, and what should they aim to do when filtering?
One definition of AI is that it is the process of understanding how humans operate, communicate and behave, so that we can replicate that behaviour on a large scale through computing power. This is a conclusion I reached whilst at The Computer History Museum in Mountain View.
This is because getting better at coding computers means gaining a greater understanding of the way we logically construct our language. And our language is a good window into the nature of our intelligence.
Eliezer Yudkowsky of the Machine Intelligence Research Institute says something similar: "I keep trying to explain to people that the archetype of intelligence is not Dustin Hoffman in Rain Man, it is a human being, period."
I'm arguing that in order to replicate human thought and action, our AI systems need to include some sort of 'consent' in the filters we use. The system should not merely infer our likes and dislikes from observation; we have to tell the system our likes and dislikes.
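As a sketch of the difference, a consent-based filter would take its weights only from preferences the user has explicitly declared, and would let the user inspect or revoke them. Again, this is a hypothetical Python illustration of the idea, not a proposal for a specific system:

```python
class ConsentFeed:
    """Toy consent-based filter: ranking weights come only from
    preferences the user has explicitly declared."""

    def __init__(self):
        self.declared = {}  # topic -> weight the user chose

    def declare(self, topic, weight):
        # The user tells the system a like (positive weight)
        # or a dislike (negative weight).
        self.declared[topic] = weight

    def revoke(self, topic):
        # The user can withdraw consent for any filter at any time.
        self.declared.pop(topic, None)

    def rank(self, items):
        # Undeclared topics stay unfiltered (weight 0) rather than
        # being silently inferred from logged behaviour.
        return sorted(items, key=lambda i: self.declared.get(i.topic, 0.0), reverse=True)
```

The design choice is that the mapping from preference to ranking is visible and editable, so a user who disagrees with the filter can see why an item ranked where it did.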
In some cases, we are looking for facts: 'Where is the nearest bank?' or 'How long does it take to get from Westminster to Kings Cross?'. These are simple questions, and simple AI systems can determine the answers.
However, when we are looking for information about contentious subjects like redundancy policy, privacy strategy, global warming, or international relations, it is vitally important that we consent to the filters that are put in place, or else we will be oblivious to systemic bias.
How often do you use an artificial intelligence system without seriously thinking about the process through which it has given you information?
The question that follows is: what system is available for intelligent professionals to filter the noise in a transparent way?