
Forward Yet Faulting: Decoding SEBI’s Straitjacketed Approach to the Artificial-Intelligence Genome

The authors are Dhruv Madan and Shauryavardhan Tomar, third-year students at Jindal Global Law School, Sonipat.


Introduction


The proliferation of Artificial Intelligence (“AI”) and Machine Learning (“ML”) in the financial sector is poised to play a significant role in the transformation of private investments. The report ATM: Global Industry Analysis, Trends, Market Size, and Forecasts up to 2025 projects the global AI-ML trading market to grow at a compound annual rate of 11.8%. This trend underscores the pressing need for a meticulously tailored regulatory framework to govern the AI-ML tools used by fiduciary entities operating in the Indian securities regime.


Against this backdrop, the present article analyses the imposition of AI-ML liability on Market Infrastructure Institutions (“MIIs”), Registered Intermediaries (“RIs”), third parties, and other fiduciary persons regulated by the Securities and Exchange Board of India (“SEBI”), as proposed in its Consultation Paper dated 13 November 2024. The post then examines SEBI's straitjacketed approach in light of the complexities of the AI-ML tools used in Indian markets, and explores alternative regulatory strategies that could offer a more flexible and adaptive response to the dynamic nature of AI growth, ensuring a more nuanced and effective regulatory environment.


Position in Indian Law and Proposed Amendments


India currently has neither a central legislation governing Artificial Intelligence nor an effective regulatory framework for the use of AI-ML tools. In the past few years, however, SEBI has issued three circulars (dated 4 January 2019, 31 January 2019, and 9 May 2019) which cumulatively mandate reporting to SEBI and disclosure to retail investors regarding the usage of such tools.


The 13 November consultation paper responds to the pressing need to place responsibility upon market institutions that use AI-ML tools for investment or business-related activities, making them ‘solely responsible for all the consequences of such use’ in order to protect the interests of various stakeholders.


It solicits amendments to the Securities Contracts (Regulation) (Stock Exchanges and Clearing Corporations) Regulations, 2018, the Securities and Exchange Board of India (Depositories and Participants) Regulations, 2018, and the Securities and Exchange Board of India (Intermediaries) Regulations, 2008 to include two clauses:

  1. a definition clause, which categorises any combination of software programs and executable systems used to facilitate investment or trading as AI-ML tools; and

  2. a liability clause, which holds any person, stock exchange, or depository that designs or uses such combinations solely responsible for any adverse outcomes concerning the privacy, security, and integrity of investors, including their data, any monetary outputs, and compliance with applicable laws in force, irrespective of the scale and scenario of the adoption of AI-ML tools.


Future Challenges to SEBI’s Approach


While the consultation paper is a step in the right direction for formulating a regulatory framework for the use of AI-ML tools, a few critical elements must be evaluated before consolidating the degree of liability on this matter.


To begin with, the liability as framed encompasses every person regulated by SEBI that uses such AI-ML tools while conducting its activities, whether directly in the securities market or indirectly in servicing clientele, regardless of the scope of involvement or the scale of implementation of such mechanisms. Such a person would be solely responsible for all the consequences of AI-ML mechanisms, including safeguarding the privacy and security of investors’ and stakeholders’ data and any output generated by such techniques. While this investor- and stakeholder-centric approach has its merits, the consultation paper is premised upon a uniform treatment of all AI-ML tools used by an entity.


In practice, however, MIIs and RIs in India employ four different types of AI-ML tools for trading purposes - quantitative, algorithmic, high-frequency, and automated - each involving a different level of risk and human involvement. Quantitative trading utilises price and volume data to identify and capitalise on the most profitable investment opportunities; it analyses large datasets to uncover patterns or trends that can inform trading decisions and requires the greatest degree of human involvement. Algorithmic trading, on the other hand, employs predefined rules derived from historical data to execute trades automatically, thereby eliminating reliance on human execution altogether; it is designed to optimise trading strategies on the basis of empirical evidence, making it strictly systematic and data-driven (a simple rule of this kind is sketched below). High-frequency trading represents a subset of algorithmic trading characterised by the rapid execution of multiple trades, often involving the buying and selling of large volumes of securities in fractions of a second; it leverages advanced algorithms and high-speed data networks to capitalise on minute price discrepancies that exist for mere moments. Lastly, automated trading builds upon quantitative trading by integrating technical analysis with computer algorithms to execute trades without direct human intervention, minimising rather than wholly eliminating human involvement in the trading process. The straitjacketed formula of placing all AI-ML tools and their liability in the same stratum is therefore problematic, as it fails to account for the varying levels of risk and human intervention that each system entails.
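To make the distinction concrete, the following is a minimal, purely illustrative Python sketch of the kind of predefined, data-driven rule on which algorithmic trading relies - a moving-average crossover signal. The function names, window sizes, and price series are hypothetical and do not reflect any actual system deployed by an MII or RI.

```python
# Illustrative sketch only: a predefined moving-average crossover rule
# of the kind used in algorithmic trading. All names and the price
# series below are hypothetical.

def moving_average(prices, window):
    """Trailing average over the last `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, short_window=3, long_window=5):
    """Return 'BUY', 'SELL', or 'HOLD' from a fixed crossover rule.

    The rule is predefined and data-driven: no human judgement is
    applied at execution time, mirroring how algorithmic trading
    systems operate.
    """
    if len(prices) < long_window:
        return "HOLD"  # not enough history to evaluate the rule
    short_ma = moving_average(prices, short_window)
    long_ma = moving_average(prices, long_window)
    if short_ma > long_ma:
        return "BUY"   # short-term momentum above long-term trend
    if short_ma < long_ma:
        return "SELL"  # short-term momentum below long-term trend
    return "HOLD"

if __name__ == "__main__":
    # Hypothetical daily closing prices of a security
    history = [100.0, 101.5, 99.8, 102.2, 103.1, 104.0, 103.5]
    print(crossover_signal(history))  # prints 'BUY' for this series
```

Once such a rule is deployed, every trade it triggers flows from the code alone, which is precisely why the degree of residual human oversight matters when calibrating liability.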


In contrast to the foregoing, the recently adopted European Union Artificial Intelligence Act, 2024, considered the world's first and only comprehensive regulatory framework for AI, classifies AI systems into four separate categories depending upon the risks involved and the varying degrees of human involvement:

  1. Unacceptable Risk [Art. 5]: prohibited AI systems, such as those used for social scoring, biometric categorisation, and certain real-time remote biometric identification.

  2. High Risk [Art. 6]: systems such as bank credit-scoring tools or AI-operated insurance-claims assessment, which usually require prior government approval.

  3. Limited Risk [Art. 52]: technologies such as chatbots and programs that assist in data management or organisation. Developers and users face fewer compliance requirements but are nonetheless mandated to disclose their usage to end users.

  4. Minimal Risk [Art. 69]: the most rudimentary applications, such as spam filters and AI-enabled games, which are unregulated as they pose negligible harm.


The Act is often critiqued for its onerous compliance requirements, but this graduated classification ensures effective regulation and balances safety by attaching liability to persons in proportion to the risks involved.


If applied in the Indian context, quantitative AI tools would fall within limited risk with high human involvement. Algorithmic AI tools would also fall within limited risk, but with a lower degree of human involvement, as they are merely an executory mechanism. By contrast, high-frequency and automated trading tools would fall within high risk, since both analyse and execute transactions at massive intensities with no human involvement, and would accordingly require government approvals. Liability and fines would be adjusted accordingly.
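The proportionality advocated here can be expressed schematically. Below is a minimal Python sketch of how such a risk-tiered mapping might look, following the classification suggested above; the tier names, obligations, and the mapping itself are hypothetical illustrations, not an actual SEBI or EU rule set.

```python
# Hypothetical illustration of a risk-tiered liability mapping for
# Indian trading tools, modelled on the EU AI Act's graduated approach.

from dataclasses import dataclass

@dataclass
class RiskTier:
    name: str                 # EU-style tier: "limited" or "high"
    human_involvement: str    # degree of human involvement
    obligations: list[str]    # compliance obligations scaling with risk

# Mapping follows the article's suggested classification; all
# obligations listed are illustrative assumptions.
TOOL_TIERS = {
    "quantitative": RiskTier(
        "limited", "high",
        ["disclose AI usage to investors"],
    ),
    "algorithmic": RiskTier(
        "limited", "low",
        ["disclose AI usage to investors"],
    ),
    "high_frequency": RiskTier(
        "high", "none",
        ["prior regulatory approval", "periodic audits", "disclosure"],
    ),
    "automated": RiskTier(
        "high", "none",
        ["prior regulatory approval", "periodic audits", "disclosure"],
    ),
}

def compliance_for(tool: str) -> RiskTier:
    """Look up the (hypothetical) tier and obligations for a trading tool."""
    return TOOL_TIERS[tool]

if __name__ == "__main__":
    tier = compliance_for("high_frequency")
    print(f"Tier: {tier.name}; obligations: {', '.join(tier.obligations)}")
```

The point of the sketch is simply that obligations grow with the tier: a disclosure duty for limited-risk tools, but approval and audit duties for high-risk ones, rather than a single undifferentiated liability for all.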


SEBI’s consultation paper instead answers this need with a ‘one-size-fits-all’ approach, attaching straitjacketed liability to the usage of all AI-ML tools. This reveals a significant gap relative to the EU AI Act, which adopts a nuanced, risk-categorised classification: the EU Act sorts AI systems into unacceptable, high, limited, and minimal risk, tailoring compliance and liability to the potential impact of the system in order to ensure effective regulation. By treating every tool alike, SEBI’s foundational step risks imposing obligations that are too light on systems carrying greater systemic risks, such as those handling biometric data or credit assessments that pose a major threat to privacy, while placing comparatively benign tools, such as algorithmic AI, on the same stratum.


SEBI's blanket imposition of liability on all AI usage disregards this developed categorisation of AI used by fiduciary entities operating in the Indian securities market. The approach risks stifling innovation by failing to distinguish between low-risk tools, like AI-assisted chatbots, and comparatively higher-risk systems, such as automated AI tools. The phrase ‘irrespective of the scale and scenario of adoption’ implies that even minimal or experimental use of AI tools falls within the ambit of the regulation. This straitjacketed approach may discourage nascent players from experimenting with AI-driven solutions for low-risk activities, curbing innovation in sectors where such technologies could be transformative.


Although the provision covers tools ‘designed by it or procured from third-party providers’, it does not authorise the Board to certify or audit these tools. A potential solution would be proactive involvement of the Board in maintaining technological standards, conducting audits, or certifying AI-ML tools or their designers for certain high-risk tasks, such as automated trade execution and options trading. While protecting investor interests, such a step could alleviate some of the burden on MIIs, RIs, and third parties and ensure consistency and proportionality.


Conclusion


SEBI's stance on regulating AI-ML tools reveals a structural constraint: a straitjacketed imposition of liability upon market institutions. The approach's fundamental flaw lies in its uniform, context-insensitive, and limited understanding of AI, which fails to calibrate oversight to the nuanced complexities of such systems. Drawing on the EU AI Act, SEBI could adopt a risk-tiered framework that would make its regulatory strategy more adaptive and intelligent, protecting the market while balancing the interests of AI implementation.


It is undeniable that SEBI’s regulatory authority is limited to secondary legislation, and such a paradigm requires the strong foundation of parliamentary law. Nonetheless, a comprehensive and risk-sensitive regulatory framework would facilitate proportional liability and proper compliance. A more refined understanding of systemic risk and liability must attach to AI-ML tools in order to foster an environment that safeguards market integrity while supporting the growth of AI in India’s financial institutions.
