“Think of foundation AI models as going well beyond large language models”

NEW DELHI : As Vice President of IBM Research AI, Sriram Raghavan leads the American company’s artificial intelligence (AI) research labs. Until recently he was Director of the IBM Research Lab in India and the Research Center in Singapore. In an interview, he shares IBM’s AI strategy, his thoughts on how to achieve a return on investment (ROI) with AI, the impact of advances in quantum computing on AI and why AI needs an ethical framework. Edited excerpts:

Why are CXOs of many companies around the world and also in India that have adopted AI still struggling with ROI?

The use cases are well understood. So if you can get the AI model up and running and build it with the right investment, the business impact is clear. But does it take six months to build? Do I still need 300 people to maintain the model? These are the ROI questions they (CXOs) struggle with. That is why we are very excited about foundation models (large models like Generative Pre-trained Transformer 3, or GPT-3), though we think of them as going well beyond large language models. At the core of foundation models is the following idea: can I train a model to learn a representation without human supervision – self-supervised? If so, I am limited only by the computing power and infrastructure needed to process all that data.

Imagine I had to complete 20 NLP (Natural Language Processing) tasks, including question answering, sentiment analysis, and extraction. The traditional approach has been to collect and curate labelled data separately for each task and build a separate model for each, rather than processing all your data in one model.

With foundation models, you are not limited by labelled data (like “cat” or “dog”) because your model can be trained without it. I also don’t have to start from raw data every time, so 20 AI models can be built on the same data set. So I pay the cost of data curation once instead of 20 times (hence better ROI). The challenge, however, is that you must have the skills and computing power to train these large AI models.
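The economics described above – one shared representation paid for once, with many cheap task-specific layers on top – can be sketched in a few lines. This is purely illustrative: the encoder below is a crude stand-in for a self-supervised foundation model, and names like `SharedEncoder` and `TaskHead` are hypothetical, not IBM APIs.

```python
# Toy sketch of "pay for curation once, reuse for many tasks".
# SharedEncoder stands in for a large self-supervised foundation model;
# it needs no labels. Only each small TaskHead needs labelled examples.

class SharedEncoder:
    """Stand-in for a self-supervised model: maps text to a fixed-size
    vector without needing any labels at 'pretraining' time."""
    BUCKETS = 32

    def encode(self, text: str) -> list:
        # Crude hashed bag-of-words in place of learned representations.
        vec = [0.0] * self.BUCKETS
        for word in text.lower().split():
            vec[sum(ord(c) for c in word) % self.BUCKETS] += 1.0
        return vec

class TaskHead:
    """Lightweight per-task layer; only this part needs labelled data."""
    def __init__(self, encoder, labelled_examples):
        self.encoder = encoder
        # One averaged 'prototype' vector per label (nearest-centroid rule).
        grouped = {}
        for text, label in labelled_examples:
            grouped.setdefault(label, []).append(encoder.encode(text))
        self.prototypes = {
            label: [sum(col) / len(vecs) for col in zip(*vecs)]
            for label, vecs in grouped.items()
        }

    def predict(self, text):
        v = self.encoder.encode(text)
        def dist(proto):
            return sum((a - b) ** 2 for a, b in zip(v, proto))
        return min(self.prototypes, key=lambda lbl: dist(self.prototypes[lbl]))

# One encoder, built once...
encoder = SharedEncoder()
# ...reused by independent task heads. Sentiment is shown; question answering,
# extraction, etc. would each add their own small head, not a whole new model.
sentiment = TaskHead(encoder, [("great wonderful superb", "pos"),
                               ("awful terrible bad", "neg")])
print(sentiment.predict("this was wonderful and great"))  # → pos
```

In a real system the encoder would be a pretrained transformer and the heads would be fine-tuned layers, but the cost structure is the same: the expensive, label-free pretraining is amortized across every downstream task.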

IBM has spoken about NLP, AI Automation, Advanced AI, Scaling AI and Trust AI as part of its overall AI approach. What do these terms mean for companies?

At its core, the focus on NLP and trust is a recognition that there is a science to building trustworthy AI. Then there is the operationalization of trust. In an enterprise context, this may involve using NLP to build conversational systems. NLP also lets us extract insights that help with IT automation. Then there is AI automation – the application of AI to business and IT automation.

Many people need to build AI models. How do we empower them to build the right products more easily and faster?

This is sometimes referred to as Scaling AI. All of this is underpinned by the fact that we continue to view our AI and hybrid cloud strategy as tightly coupled because we always build AI to run where the data resides.

With the tremendous strides that AI has made over the last few years, do you think we’ve reached a stage where a breakthrough that makes AI sentient is possible?

AI is clearly not sentient. With these more recent advances we have continued to improve pattern recognition and representation at scale – we not only do prediction and classification, we also generate – but we are still data-driven. Representations have become more powerful because they are learned from data. But we’re a far cry from anything in AI that could be called sentient.

Give us some examples of how AI is being automated.

The cross-industry, cross-geography use case that is easiest for people to start with is conversational AI for customer interaction. The second is the application of AI to IT automation. The third is process automation, workflow automation or business automation.

We’re also seeing a shift from task automation to task orchestration. Can AI move beyond automating individual tasks – how to run a credit check, how to verify citizenship – to composing the entire flow once you specify the outcome you want? That’s the vision behind Orchestrator, and it will expand the scope of automation.
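The task-versus-flow distinction above can be sketched as code. This is a toy illustration only, assuming hypothetical task functions; it is not the API of IBM's actual orchestration product.

```python
# Illustrative sketch: individual task automations (credit check,
# citizenship verification) composed into one flow by an orchestrator.
# All function names here are hypothetical stand-ins.

def credit_check(applicant):
    # Stand-in for an automated credit-check task.
    return applicant.get("credit_score", 0) >= 650

def citizenship_check(applicant):
    # Stand-in for an automated citizenship-verification task.
    return applicant.get("citizen", False)

def orchestrate(applicant, required_checks):
    """Compose independent task automations into a single flow and
    report which step failed, if any."""
    for name, check in required_checks:
        if not check(applicant):
            return f"rejected at {name}"
    return "approved"

flow = [("credit check", credit_check),
        ("citizenship verification", citizenship_check)]
print(orchestrate({"credit_score": 700, "citizen": True}, flow))  # → approved
print(orchestrate({"credit_score": 600, "citizen": True}, flow))  # → rejected at credit check
```

Each task is automated on its own; what orchestration adds is assembling them into a flow driven by the desired outcome rather than by hand-wired steps.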

Work around network automation is gaining traction as telcos increasingly roll out new networks such as 5G, which is why they need more and more AI technology. For example, the research lab in India has been instrumental in some of our global work on network automation, including 5G operations where operators wanted to use AI for automated resource allocation in 5G network slicing (partitioning a network into multiple virtual networks that can be tailored to the traffic needs of different uses). I also see a great opportunity for AI in sustainability, which is why IBM Research has invested so much, working with our business units, in the Environmental Intelligence Suite.
