Are Some IITs Over-Pampered and Underperforming?

Photo: IIT Indore.

This article follows from a previous one published on June 14.

The fifth edition of the National Institutional Ranking Framework (NIRF) was released online on June 11. This year, seven of the top 10 places have gone to the IITs. In all five editions, the older, preeminent IITs have occupied the top spots.

On June 14, I had pointed out that the NIRF methodology had an important flaw. It builds a single score from five categories: teaching, learning and resources (TLR), research and professional practices (RPC), graduation outcomes (GO), outreach and inclusivity (OI), and perception. These five broad heads are built up from various sub-heads, and a complex weighting and addition scheme is used to obtain the overall rating score, which can take a maximum value of 100. The institutions are finally rank-ordered based on these scores.

The flaw is that output and input scores are added to obtain the final score instead of being divided. Any performance analysis requires identifying an input measure and computing a quality score as the ratio of output (for which RPC and GO are appropriate proxies) to input (TLR). Instead, NIRF adds the inputs to the outputs, i.e. TLR, RPC and GO, along with OI and perception, to get the final score. Note in addition that OI and perception relate neither to academic excellence nor to research excellence, but these are added as well.
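To make the contrast concrete, here is a schematic of the two approaches; the weights $w_i$ merely stand in for NIRF's published weighting scheme and are not the actual values:

\[
\text{NIRF score} = w_1\,\mathrm{TLR} + w_2\,\mathrm{RPC} + w_3\,\mathrm{GO} + w_4\,\mathrm{OI} + w_5\,\mathrm{Perception}
\]

whereas a ratio-based quality measure of the kind argued for here would look like

\[
\text{Quality} = \frac{\text{output}}{\text{input}} \approx \frac{\mathrm{RPC} + \mathrm{GO}}{\mathrm{TLR}}
\]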

P. Sriram of IIT Madras recently pointed out in a personal communication that NIRF treats its performance parameters, grouped under heads such as TLR, publications and graduation outcomes, as reflecting “desirable” and “undesirable” traits, with the desirable ones receiving high scores and the undesirable ones low scores. From this viewpoint, the TLR parameter receives a high score for a high faculty-student ratio, high spending on infrastructure, high PhD enrolment, etc. NIRF then adds up all the scores but completely overlooks the systems paradigm: from a systems viewpoint, the ‘best’ institute should be identified by the ratio of its outputs to its inputs.

Now, nowhere in the NIRF portal can one find a size-dependent measure of the input or a size-dependent proxy for the output, both vis-à-vis academic excellence. The systems approach allows one to compute quality as the output score divided by the input score. Professor Sriram suggested that capital expenditure could be a meaningful input measure, but some experimentation showed that the major chunk of expenditure is committed to the salaries of faculty members and non-teaching staff, and that it may be better to use ‘total expenditure’ as the proxy for input.

Keeping this in mind, I computed the total expenditure (the sum of capital and operating expenditures) as a proxy for the input, and the sum of the RPC and GO scores as the output, for the top 100 engineering institutes in the current edition of the NIRF rankings. The cost-effectiveness of each institution is then simply the ratio of output to input.
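As a minimal sketch of this computation, with hypothetical institute names, scores and expenditure figures (none of these numbers are NIRF data):

```python
# Minimal sketch: cost-effectiveness = (RPC + GO) / total expenditure.
# The institute names and figures are illustrative placeholders,
# not actual NIRF scores or expenditure data.

institutes = [
    # (name, RPC score, GO score, total expenditure in rupees crore)
    ("Institute A", 85.0, 90.0, 900.0),
    ("Institute B", 60.0, 80.0, 250.0),
    ("Institute C", 70.0, 75.0, 400.0),
]

def cost_effectiveness(rpc, go, total_expenditure):
    """Output (RPC + GO) divided by input (total expenditure)."""
    return (rpc + go) / total_expenditure

# Rank institutes from most to least cost-effective.
ranked = sorted(
    institutes,
    key=lambda row: cost_effectiveness(row[1], row[2], row[3]),
    reverse=True,
)

for name, rpc, go, spend in ranked:
    print(f"{name}: {cost_effectiveness(rpc, go, spend):.3f}")
```

On this measure, a large institute with high absolute output but a much larger expenditure can fall below a leaner one that does more with less, which is the pattern described below.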

The table and figure below summarise what I found. IIT Indore and Jadavpur University stand out on this basis. Many National Institutes of Technology rise into the top positions. And most of the preeminent IITs vanish from this list, as the law of diminishing returns comes to the fore. So perhaps we must ask if some of the IITs are over-pampered and underperforming.

This argument can be carried over to other ranking lists as well.

Gangan Prathap is an aeronautical engineer and former scientist at the National Aeronautical Laboratory, Bangalore and former VC of Cochin University of Science and Technology. He is currently a professor at the A.P.J. Abdul Kalam Technological University, Thiruvananthapuram.
