
Controlling, addictive AI needs immediate attention


Can we govern the ungovernable? Should we even try to contain the advance of algorithms? These difficult questions don’t have a simple answer. However, what is clear is that the world needs a strong governance structure to shape the impact of algorithms and AI on our lives.

At the core of the concerns are the nature and design of algorithms that influence our choices. Online consumption, for instance, is not a free choice. Algorithms prod, poke and drive the consumer into a narrow set of choices which they may not have selected otherwise.

“It is important to have ways to oversee the operations of these systems to ensure they are helping, not harming, humanity. The flurry of governance frameworks over the past two years has been crucial in helping leaders to better understand the issues surrounding AI, including potential for fairness and discrimination, disparate impact, and the associated issues of transparency and accountability,” says a recent report by the World Economic Forum (WEF). “But much more innovation in the realm of AI governance is needed if we are to keep pace with both the advancement and application of AI-based systems,” adds the report, titled The AI Governance Journey, published in November.

Until recently, unfair market practices in the retail sector largely revolved around predatory pricing. In some cases, they involved using market muscle to prevent rivals from expanding their consumer base.

Today, unfair market practices are often baked into the business model using tech-based platforms of e-commerce companies. Anti-trust authorities in most free-market economies including India are trying to peek under the hood of the engines that run e-commerce sales.

Parts of the unfair play in digital markets are easier to see. Some e-commerce companies own a big chunk of a seller and therefore find it in their interest to promote that particular seller.

Other unfair trade practices involve using algorithms that allow collusion between seemingly independent companies, or that manage reactive pricing in ways that can hurt smaller sellers. The e-commerce companies may say that algorithms don’t choose for the consumer; consumers choose for themselves. However, the facts say otherwise.

The question now is not whether consumers choose or not. The question is: what is their choice? Are the options available to consumers open and fair? More importantly, do sellers have equal access to the consumers in the market? Today this paradigm is often decided by the software robots that run the digital markets.

“It will be important to monitor developments in the application of machine learning and Artificial Intelligence to ensure they do not lead to anti-competitive behavior or consumer detriment, particularly in relation to vulnerable consumers,” says the Competition and Markets Authority (CMA) of the UK. There are examples where an e-commerce site has shown different prices to different customers depending on their location. A CMA paper notes, “It has been alleged that Staples’ website displayed different prices to people, depending on how close they were to a rival brick-and-mortar store belonging to OfficeMax or Office Depot.” Similar investigations are required in India and other emerging economies to ensure that algorithm-triggered personalized pricing does not become harmful.

Another type of antitrust activity takes place when online rivals decide to use the same pricing algorithm to align the prices of different products. When questioned by regulators or anti-trust authorities, e-commerce companies like to say that the decision taken by an algorithm is not their responsibility. However, authorities including the Competition Commission of India are challenging this.

At their root, anti-trust or anti-monopoly laws aim to ensure that consumers and sellers have the freedom to choose and compete on fair terms. A few sellers should not be allowed to dominate any market to the extent that other sellers are destroyed and therefore consumer choice is undermined.

Most regulators struggle to find proof of such activity as the level of sophistication is increasing constantly. Some are already unleashing their own algorithms to track and understand the pricing software of e-commerce companies. While companies collude on pricing, governments are collaborating on curbing online malpractices. The legal liability for an algorithmic decision will be interpreted as the legal liability of an entity or an individual. Anti-trust activities of algorithms should not go unchallenged in any economy.

Similar governance rules are needed for the algorithms used by social media giants. Privacy and data protection are often the key issues when debating the regulation around social media giants. However, an important dimension that needs more attention is the algorithms that decide, define, and drive online user behavior.

Even as various countries across the world battle social media giants for lack of transparency and accountability, some governments have begun to question the algorithms too.

The US Senate Judiciary Committee recently held hearings on “Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape our Discourse and Our Minds.”

Like many countries, the US is concerned about algorithms that are designed to addict. “… This advanced technology is harnessed into algorithms designed to attract our time and attention on social media, and the results can be harmful to our kids’ attention spans, to the quality of our public discourse, to our public health, and even to our democracy itself,” said Sen. Chris Coons (D-DE), chair of the Senate Judiciary’s subcommittee on privacy and tech.

In the same way that India has its social media intermediary rules and laws, the US has Section 230 of the Communications Decency Act, which offers website platforms some immunity from liability for third-party content.

The Senate hearings could lead to amendments in Section 230. Another Senator at the hearing said that the business model of “these companies is addiction.”

Legislation called the ‘Don’t Push My Buttons Act’ has been introduced in the US Congress, with Tulsi Gabbard as a lead sponsor. The law would require platforms with more than 10 million users to get user permission before offering them content based on their past behavior.

Basically, this means that companies could not mine our behavior and drive us further into similar content. This behavior is believed to have been particularly harmful during the Brexit debate. Rather than allowing people to explore and stumble upon new content and alternate views on a subject, the algorithms drove users into more of the same. Effectively, they created online echo chambers and prevented people from absorbing other ideas.

The same principle can apply to consumer products or services. Algorithms can drive consumers to certain brands and categories, reducing choice and therefore hurting competition.

The proposed laws seek changes to Section 230, removing the protection offered to the giants if they persist with addictive algorithms. Companies including Facebook, Google, and Twitter have testified at the Senate hearings on addictive algorithms.

While the hearings are focused on US citizens, governments in other countries should also be alert to the consequences of addictive algorithms. As the government of India establishes the rules of play for social media giants, it will be important to scrutinize and question addictive algorithms. With an addressable market of over a billion users, the tech giants will invest substantial resources to increase their user base. The variety of languages and users in the country makes algorithms that exploit personal data all the more effective.

India has to put in place legislation and rules which seek more clarity and transparency from technology companies. Domestic and global companies that use consumer behavior data to enhance addictive behavior must be scrutinized and controlled.

Currently, the intermediary guidelines focus mostly on content management and grievance redressal. However, the underlying software engines that influence online consumer behavior need oversight too.

The WEF report has made some suggestions for the future. The world needs, in its words, “standards providing a framework for responsible AI”, “standards for measuring bias, fairness and related technical details”, and “processes and tools for assessing AI systems”. The regulation of the algorithms that define AI, and thus our choices, will have to happen at several levels, from multilateral to national to local, depending on the sector, geography, and usage.

The writer is the author of ‘India Automated: How the Fourth Industrial Revolution is Transforming India’. Views expressed are the writer’s personal.
