Sebi wants to keep a closer eye on bots. With a discussion paper, 'Guidelines for Responsible Usage of AI/ML in the Indian Securities Market', it has kicked off consultations on how to rein in the rise of machines on Dalal Street. The move is timely, with AI having acquired the capability to analyse data and execute trades on a scale not humanly possible.
AI does not fundamentally introduce new risks into markets, but it can exacerbate existing ones unless regulated. Sebi's approach is to attempt this through rules about models, testing, disclosure, bias and privacy. Ideally, holistic AI regulation should cover all these elements, reducing the need for separate frameworks for financial markets, yet some areas require specific intervention.
The discussion paper has gone into most of these: governance, risk management and investor protection. The scope of regulation will have to be widened to cover non-traditional players and third-party relationships.
Generative AI holds the promise of democratising wealth creation. AI models can be trained to process financial and non-financial information at a fraction of the cost of proprietary trading systems. This must be accomplished with greater accountability and oversight, with safeguards against the tech's shortcomings.
Eventually, investing can move into agentic AI territory, and the process must bake in sufficient human oversight. Those rules ought to derive from the scaffolding Sebi is proposing to put in place now.
Sebi has outlined high-level principles on AI regulation to put it in the context of its broader oversight of the securities market. The early-mover advantage in this approach could spur investments in AI that may become constrained by rules in the future.
The pace of AI evolution and diffusion makes it difficult to regulate, and Sebi has taken a neutral approach while setting out its intent. This is a pragmatic way to reach regulatory outcomes regardless of the technologies the securities market adopts.