IIT Hyderabad researchers break down working of AI programs
Team Careers360 | September 9, 2019 | 03:37 PM IST | 2 mins read
NEW DELHI, SEPTEMBER 9: Indian Institute of Technology Hyderabad researchers have developed a method by which the inner workings of Artificial Intelligence models can be understood in terms of causal attributes.
‘Artificial Neural Networks’ (ANNs) are AI models and programs that mimic the working of the human brain, enabling machines to learn to make decisions in a more human-like manner. Modern ANNs, often also called Deep Learning (DL) models, have grown tremendously in complexity: machines can train themselves to process and learn from the data supplied to them as input, and nearly match human performance on many tasks. However, how they arrive at decisions is unknown, making them less useful in settings where the reasons behind a decision must be known.
This work was performed by Dr. Vineeth N. Balasubramanian, Associate Professor, Department of Computer Science and Engineering, IIT Hyderabad, and his students Aditya Chattopadhyay, Piyushi Manupriya, and Anirban Sarkar. Their work was recently published in the Proceedings of the 36th International Conference on Machine Learning, considered worldwide to be one of the highest-rated conferences in Artificial Intelligence and Machine Learning.
Speaking about the research, Dr. Vineeth Balasubramanian said, “The simplest applications that we know of Deep Learning (DL) are in machine translation, speech recognition and face detection. It enables voice-based control in consumer devices such as phones, tablets, television sets and hands-free speakers. New algorithms are being used in a variety of disciplines including engineering, finance, artificial perception, and control and simulation. Much as the achievements have wowed everyone, there are challenges to be met.”
DL algorithms are trained on a limited amount of data that most often differs from real-world data. Furthermore, human error during training and spurious correlations in the data can introduce errors that are hard to detect and correct. “If treated as black boxes, there is no way of knowing whether the model actually learned a concept or a high accuracy was just fortuitous,” added Dr. Vineeth Balasubramanian.
Explaining this area of work, Dr. Balasubramanian said, “Thanks to our students’ efforts and hard work, we have proposed a new method to compute the Average Causal Effect of an input neuron on an output neuron. It is important to understand which input parameter is ‘causally’ responsible for a given output; for example, in the field of medicine, how does one know which patient attribute was causally responsible for the heart attack? Our (IIT Hyderabad researchers’) method provides a tool to analyze such causal effects.”
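The idea behind the Average Causal Effect (ACE) can be illustrated with a small sketch. This is not the researchers' implementation; it is a minimal Monte Carlo illustration of the underlying causal notion, assuming independent inputs and using a hand-written toy function in place of a trained network. The ACE of an input at a value alpha is the interventional expectation of the output when that input is clamped (Pearl's do-operation) to alpha, minus a baseline interventional expectation:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy stand-in for a trained network's output neuron.
    # Input 0 enters linearly, input 1 quadratically, input 2 not at all.
    return 2.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.0 * x[:, 2]

def interventional_expectation(neuron, alpha, n=10_000):
    """Estimate E[y | do(x_neuron = alpha)]: sample the other inputs
    from their (assumed independent) prior, clamp x_neuron to alpha,
    and average the model's output."""
    x = rng.normal(size=(n, 3))
    x[:, neuron] = alpha          # the do()-intervention
    return model(x).mean()

def average_causal_effect(neuron, alpha, baseline):
    """ACE of input `neuron` at value alpha, relative to a baseline
    interventional expectation (here: averaged over the input's prior)."""
    return interventional_expectation(neuron, alpha) - baseline

# Baseline for input 0: average interventional expectation over its own prior.
baseline = np.mean([interventional_expectation(0, a)
                    for a in rng.normal(size=200)])
ace = average_causal_effect(0, alpha=1.0, baseline=baseline)
print(ace)  # input 0 enters linearly with weight 2, so this is near 2.0
```

In this toy setting, the ACE recovers what one would expect: clamping input 0 to 1.0 shifts the output by roughly its linear weight of 2, while the same computation for the inert input 2 would yield an ACE near zero. In the medical example from the quote above, the analogous question is how much clamping a patient attribute to a given value would shift the predicted outcome.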