AI and the campaign for each individual vote


Lutfi Hakim

It’s generally accepted today that people get their information from a variety of sources: official, commercial, social. There’s a pick-and-mix approach to information consumption, which means the audience can be segmented down to the individual.

We might all be watching the same badminton match on TV, but once the match is finished we load up our preferred sports news portals to read their takes on it. Some might choose to read in BM, some might choose more critical commentators, some might look for a more technical breakdown of the game, and some might look for humorous takes with plenty of GIFs.

What is new is not that there is a variety of viewpoints, but that there are now so many of them. So much so that the only thing people have in common is that they all watched the game.

The fact that technology has caused the audience to become so highly segmented is a problem for publishers and broadcasters because it has led to lower advertising revenues, but for those who seek to influence society, it has opened the door to new possibilities. Marketers today can define, at a granular level, the type of person they want to target, and run multiple experiments to find the best way to influence them.

This method of hypertargeting has been made possible by the trail of breadcrumbs we leave behind on the internet. The pages we visit, the things we post, the apps we can’t live without; we don’t pay a cent for any of this, but in exchange we agree to let corporations collect and use our data for advertising purposes. All of our likes, watch history and browsing habits, bit by bit, build up a profile of our digital selves.

Research has shown that computer models can predict broad personality traits from analysing as few as ten likes on a person’s Facebook account. With 150 likes, the model knows the person better than a family member would; with 300, better than a spouse.

An average Facebook user has 227 likes, which is more than enough for an approximation of their personality. Marketers can tap into this data to create ads that try to influence you by speaking directly to your concerns. Since you’re always logged on to your online accounts, you’ll find these ads following you all over the internet, badgering you again and again.

This is a concern on the civic front, as the technology can be used to promote views that sow division, away from public scrutiny. Where leaders in the past had to gravitate to the centre when presenting their ideas and perspectives for public scrutiny, the ability to operate below the radar allows influencers to capitalise on individual fears and concerns and push people towards stronger opinions. A variety of messages is prepared and trialled algorithmically for each type of target to see which is most effective, whether that comes down to tone, language, triggers or other components of the message. Each person targeted could potentially get a different version of the ad, and those who are not of interest to the campaign runners won’t even know of its existence.

It can also be used to persuade people to stay home. During the 2016 American elections, the Trump campaign ran social media ads that targeted selected black voters and immigrant neighbourhoods with controversial statements from Hillary Clinton. These dark posts, invisible to anyone not targeted, were used on key groups of Democratic Party voters. In marginal seats, keeping a slice of your opponent’s support base away from the ballot box makes a difference.

It goes without saying that all of this costs a lot. Organisations with greater resources will be able to put in higher bids for ads and run more sophisticated campaigns. This raises a number of civic concerns about the use of data to influence people.

From a party political standpoint, there is uncertainty over how to control political spending on these types of campaigns. Campaigns that combine the creation and maintenance of large datasets, the crafting of multiple messaging strategies and content, and the development of complex algorithms that continuously tweak a campaign’s effectiveness certainly do not come cheap. In more sophisticated examples, algorithms are used to iterate content (e.g. videos, articles) that better persuades its targets. Being able to operate in the dark allows these campaigns to push content that presses people’s emotional buttons and capitalises on biases.

Scholars and civil society activists have already begun voicing their concern that campaigns of this nature will be a strong tool in the arsenal of well-resourced parties or lobby groups. As the technology is new, it remains unclear whether there is sufficient legal oversight of advertising via social media and ad networks.

In the UK, an investigation is underway into the targeting of voters on social media, to see if personal data laws were breached. While official remedies and guidelines on the use of these campaigns would help greatly, ordinary people can also mitigate their effects. If we come across promoted posts on our feeds that we find appealing, we should keep in mind that there are professionals running complex operations to capture our attention. Give them a read if you want, but make it a point to look for a second opinion before making your choice. – June 5, 2017.

* Lutfi is a fan of podcasts, cinema, dry history books, and technology (not necessarily in that order). Currently studying for a master’s in political communication in Sheffield, he is an associate of IMAN Research.

* This is the opinion of the writer or publication and does not necessarily represent the views of The Malaysian Insight. Article may be edited for brevity and clarity.

