
Mediating Financial Inclusion through Informational Safeguards: Regulating Insurtech in India

The author is Siddharth Johar, a second-year student at the National Law School of India University, Bengaluru.


Introduction


K.S. Puttaswamy v. Union of India (2017) (‘Puttaswamy’) has come to be regarded as a watershed moment in the constitutional history of India, not only for its affirmation of the right to privacy within Article 21 but also for according sanctity to the concept of informational privacy. This sanctity was essential, given the increasingly intertwined relationship between technology and society.

This ‘intertwined relationship’ is visible in the emergence of innovative fields such as FinTech, which is transforming the ‘access, quality, and usage’ of financial services and allowing the inclusion of a much wider consumer base. This is particularly significant for a developing country like India, where the penetration of credit and insurance services remains minimal amongst marginalised populations. However, this advancement rests on Big Data and Artificial Intelligence, where measures of inclusion necessarily involve the collection of huge data repositories, raising pertinent concerns regarding the privacy of individuals.

This piece aims to analyse this tension in the case of InsurTech services in India. It does so by, first, highlighting that InsurTech services present fundamental risks to the effective delivery of financial services and directing focus towards informational privacy, and, second, demonstrating the inadequacy of the present ecosystem of regulations in attending to the highlighted risks. In doing so, this piece argues that the regulation of insurance in the age of Big Data must necessarily involve considerations of privacy in order to achieve Financial Inclusion and help FinTech services reach the marginalised.


InsurTech in India and Big Data


The Indian FinTech market is one of the fastest growing in the world, estimated to reach $150 billion in the next few years. InsurTech, defined as the practice of using innovation and technology to deliver insurance services, contributes an essential chunk of this market. The actors driving this growth include not only private entities but also state entities, providing services ranging from motor insurance to health insurance under the Ayushman Bharat Scheme (‘ABS’).


These actors, while adopting digital innovations, have focussed on three essential practices: underwriting, customer interaction, and claims management. These refer, respectively, to the determination of the risk posed by the policyholder on the basis of historical data and actuarial science and the calculation of their premiums; the optimisation of the experience and access of policyholders; and the management of claim life cycles from their creation to their settlement.


The underlying basis for the efficiency of such practices is a huge repository of data and its processing, built from the tracked records of an individual’s online navigation, or ‘digital footprints’, which encompass categories of data ranging from financial details to intimate personal information. Network models such as the Internet of Things (IoT) and statistical technologies such as Machine Learning (ML) and Predictive Analytics (PA) supplement the exponential creation of such data and process it, creating an ecosystem referred to as Big Data. Although these advancements provide tremendous benefits in terms of efficiency, accessibility, and affordability, they also open up a set of concerning problems, as analysed in the subsequent section.


The Pandora’s Box: Creeping Impact of Data Technologies


The importance of the collection and processing of Big Data lies in its ability to create patterns and predictions about the individual, allowing greater scope for personalisation, which also makes the individual and their information vulnerable online.


This vulnerability lies at the core of Informational Privacy, which aims to protect an individual’s digital identity and ground their autonomy, or their right to control, exploit, and disseminate information concerning themselves.[1] However, this does not necessarily indicate that privacy and financial services are locked in a trade-off.

These technological developments, apart from their benefits of efficiency, bring into focus essential risks that impede the effective delivery of financial services to the most vulnerable. This section attempts to highlight three such risks, namely, data mining for commercial endeavours and algorithmic malpractice; black boxes; and cyber vulnerabilities. In all these situations, the concerns of individuals and FinTech align – necessarily requiring Informational Privacy, as both an essential instrument and a political liberty, for the fair distribution of socio-economic benefits.

First, the lack of historical data to assess the risk of excluded populations requires proxy sources or ‘alternative data’, which include payment and credit histories, social media usage, browsing history, employment profiles, etc. The resultant effect, if unregulated, is the erosion of any distinction between financial and non-financial data collected about individuals. This could provide scope for the identification of their group membership and the denial of services on that basis, as well as the further distribution of such data for commercial gain, given the rise of personalised feeds and targeted advertisements.

Second, the statistical technologies utilized for risk assessment and fraud detection rely on increasingly complex formulas to arrive at greater personalisation. Further, businesses aim to protect their technologies from competitors, reducing the intelligibility of their decision-making. These developments make it harder for consumers to know and scrutinize how their data is being processed (especially when biases are encoded in these models), effectively creating a ‘black box’.

Third, these huge data repositories and the digital architectures in service of them often raise the insurer’s exposure to cyber vulnerability. This vulnerability increases where security standards are inconsistent across the organisations that form part of a data aggregation ecosystem. The resultant effect is the control of external agents over individuals’ data, which could be utilized to their detriment, since the personalised nature of the data provides huge opportunities for fraud and impersonation and eventually erodes consumer trust. This requires agencies to take adequate measures in ensuring up-to-date safeguards and transparency on breaches.


These concerns necessitate the realisation of Informational Privacy, which would allow greater transparency and control over the management of data concerning individuals, while also helping to resolve the hindrances to financial inclusion. An essential question, however, is to what extent the present regulations undergird this concept, which is analysed in the next section.


Present Framework: Old Data Practices Die Hard


The regulation of informational privacy requires not only comprehensive overarching legislation for protection and enforcement but also sectoral laws to attend to the privacy interests that arise in a specific sector. This regulation must be premised on much more than the ‘notice and consent’ framework, recognizing its inadequacy in the age of Big Data, marked by the ubiquitous processing of data and a lack of information on the data being organised by insurance providers – and, even where this is known, consumers being burdened with the threat of ‘content overload’ and ‘consent fatigue’.


Thus, this regulation envisages a constitutionally sound model of meaningful consent – treating consent as continuous for the duration of the processing and organisation of information, as well as attaching obligations to ‘data fiduciaries’ (in this case, insurance authorities) wherever consent is ineffective.

The risks highlighted above raise the question of whether the present regulatory system is even capable of anticipating the troubles and rights of the information age. This system includes the Information Technology Act, 2000,[2] the Draft Health Data Management Policy under the Ayushman Bharat Scheme, and the Guidelines of the Insurance Regulatory and Development Authority of India (‘IRDAI’) – the limitations of the former two having been covered here and here. This piece analyses the sectoral regulatory system under the IRDAI, highlighting not only the conceptual inadequacy of the regulations but also their textual inadequacy in responding to the concerns presented.


The IRDAI initiated a discussion on InsurTech in 2017 with Telematics and Motor Insurance, yet it has not issued specific regulatory guidelines on InsurTech and data protection. The IRDAI (Regulatory Sandbox) Regulations, 2019 and the Guidelines on Insurance E-Commerce (2017), both of which attempt to realise the significance of technological development in the delivery of financial services, stop short of recognising data as essential to the functionality of InsurTech and the ‘protection of policyholder interest’ – all while privileging the principle of ‘better reach’ as their statutory objective.


This lack of acknowledgement indicates the conceptual limitation of the regulatory system, which is magnified by the invocations of ‘confidentiality’ (s 9(1)(b)), ‘privacy’ (s 3(6)(ii)), and ‘security of policyholders’ and ‘prejudice to policyholder’s interests’ (s 10(a)(ii) and s 13(a)). These terms, used inconsistently across regulations, have not been given definitional or functional frameworks within the regulations or been linked with any other regulation. This conceptual limitation maps onto each aspect of InsurTech, creating a huge gap in terms of the consent acquired and the obligations attached to such companies.


The guidelines on Protection of Policyholders’ Interests indicate this gap on the first count, of notice and consent: first, by failing to guarantee transparency and disclosure in the process of insurance delivery beyond descriptions of the product (as visible under sections 11 and 12), extending to the process of underwriting and the calculation of risk; and second, by failing to set any limit up to which personal information can be collected, instead imposing an absolute obligation to provide requested data on the customer under s 19(4). This textual limitation further presents material barriers to regulating the concerns of data mining through obligations of ‘purpose and storage limitation’, and of black boxes and algorithmic malpractice through obligations of explainability.


In addition to these gaps in mandating fiduciary obligations, these regulations fail on two prominent grounds in guaranteeing the protection of personal information collected by those handling an Insurance Self-Network Platform (ISNP) or the insurance records on policies and claims managed by insurers. These regulations, particularly those on the Maintenance of Insurance Records and Insurance E-Commerce, first, suffer from the same textual ambiguity as the invocations of privacy – requiring ‘necessary’ and ‘standard’ security frameworks without referencing ISO standards – and, second, do not provide for transparency on data breaches so as to maintain consumer trust.


The eventual effect of these shortcomings is that they fail to articulate even minimally tangible rights against which informational privacy can be realised, creating difficulty and confusion in the practical regulation of InsurTech’s advancements by the IRDAI. These inadequacies point toward the need for a synchronized revamp of sectoral regulations and the incorporation of solutions that respond to the particular risks arising with InsurTech. More than anything, however, they also point us towards the need to go beyond a singular conception of how financial inclusion shall manifest.


Conclusion


Technological developments present both positive and negative aspects, the consideration of which is essential to gaining the maximum utility out of these developments. Informational Privacy can become an essential instrument in resolving the negative concerns at the present hour and ensuring the real inclusion of the vulnerable.


This piece provided an analysis of the pertinent challenges of regulating InsurTech in the age of Big Data, which inhibit its reach and ideals of ‘inclusion’. These challenges also create spaces where the concerns of both individuals and insurers align, requiring adequate redressal through instruments such as Informational Privacy. However, this piece indicates that this aspect has not materialised under the present regulations, which are both conceptually and textually limited, requiring an adequate sectoral revamp that is attentive to the present challenges.



[1] K.S. Puttaswamy v. Union of India (2017) 10 SCC 1 (Kaul J) [473]

[2] Information Technology Act, 2000, Act No. 21 of 2000, ss 43, 72A. See The Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011. However, the actors being governed do not include the state (and, by extension, state-sponsored insurance like Ayushman Bharat) but only private entities.
