When Nikola Tesla conceived of the AC induction motor in the early 1880s, he showed remarkable foresight in predicting the versatility of a power distribution system based on the principle of alternating current. The AC power grid that he thus helped develop has undergone many major upheavals since then, with technology always striving to maximize efficiency and responsiveness while minimizing losses and expenses.
Perhaps the transformation with the potential to be the most far-reaching is the most recent one – the idea of a smart grid. Researchers at IIT Madras led by Prof. Sivalingam, working with IBM Research, have proposed new protocols and algorithms that exploit the capabilities of the smart grid to make it more efficient and reliable.
Akin to the transformations inspired by artificial intelligence underway in every sector today, the power grid is now conceived not as a static, unidirectional entity, but as a dynamic, bidirectional one that facilitates increasing interaction between demand and supply. These modernised versions of the power grids of yesteryear, called smart grids, are a giant leap forward in the sphere of intelligent management of vital resources. They are designed to recognize and react, in a highly self-sufficient manner, to a host of data that the hardware in the system provides, such as power usage statistics, loading variations – variations in how many devices are connected to the grid – and supply constraints.
The possibilities opened up by having such a system are seemingly endless. Smart meters will become commonplace. These meters will show not just the total power consumed but also break it down into how much was consumed in different time frames – which are set depending on power supply/demand fluctuations and can be specific hours, days or weeks – and which socket consumed how much, and so on. Power production at high demand requires greater manpower and resources to ensure that the excessive demand does not cause grid breakdown. Based on the data from the meters, the power companies will be able to provide variable tariffs, depending on when in the day, month, or year the demand is high.
Also, power plants are currently always switched on, since it is not possible to predict when demand will rise. This leads to unnecessary wastage of limited resources such as coal. By analyzing past power usage patterns, companies would be able to decide when it is safe to switch off their generators.
Consumers, on the other hand, will have complete control over how they adjust their power consumption patterns to minimize expenses. For example, households could run washing machines and other power-hungry equipment during those times of the day when the tariff is low. Power Grid Chairman and Managing Director R. N. Nayak estimates that the deployment of the smart grid in India would bring about 15-20 percent energy savings overall, which can make an enormous amount of difference, given how large a portion of the country still languishes in the dark with blackouts and brownouts.
The IEEE Standards Association ranks India as the third largest market for smart grid investments.
The social benefits of implementing the smart grid are enormous. India has prodigious potential in solar energy, and it is now possible for villages to take up solar power production on their own. With the deployment of the smart grid, residents of these villages can even become producers by selling the excess power they generate. If their power production falls short, they can always draw power from the grid. It is this versatility of the smart grid that led various state power corporations in India to incubate smart grid projects, with a total of over a dozen projects currently sanctioned.
Globally, the smart grid and allied research areas have expansive scope and have been taken up by top universities. At IIT Madras, Prof. Krishna Sivalingam, of the Computer Science and Engineering department, and his team of three post-graduate students – Kavin, Dhananjay and Karthick – took up challenges related to smart grids in collaboration with IBM Research India Labs.
The primary objective of a smart grid’s deployment is that it must provide increased efficiency and reliability in the delivery of electricity at a reasonable cost in an automated and sustainable manner. The complete requirements are specified as Quality of Service (QoS) parameters, which are derived from the grid’s properties. At any given point in time, the grid will have a large number of properties by which it can be characterized. For example, on the supply side, the grid will be operating at a specific frequency, generating a specific amount of power from a specific number of centers. Similarly, on the demand side of the network, consumers will be connecting different loads – household appliances or commercial machines – to the grid at different power factors. (The power factor measures what fraction of the apparent power supplied to a load is actually consumed as real power.) All this data must be collected, stored and analyzed to provide an efficient distribution scheme. This is made possible by deploying advanced Wide Area Monitoring Network systems, which were the major theme of the team’s research.
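The power factor relationship described above can be made concrete with a small numerical sketch (the function name and figures here are purely illustrative):

```python
import math

def power_factor(real_power_kw, apparent_power_kva):
    """Power factor = real power consumed by the load / apparent power supplied."""
    return real_power_kw / apparent_power_kva

# A load drawing 8 kW of real power from a 10 kVA supply:
pf = power_factor(8.0, 10.0)               # 0.8, i.e. 80 percent
# The same quantity equals the cosine of the phase angle between voltage and current:
phase_angle = math.degrees(math.acos(pf))  # roughly 36.9 degrees
```

A power factor of 1 means all supplied power does useful work; values below 1 mean the utility must supply more current for the same useful power.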
These systems rely extensively on data generated by Phasor Measurement Units (PMUs), or Synchrophasors. All electrical variables such as power, voltage and current are commonly represented as phasors, which are vectors rotating in the complex plane at some frequency. The frequency of the voltage phasor is the same as the frequency of the electricity that we receive in our homes (50 hertz in India). The phasor data also captures further properties of the generated power, such as its amplitude and phase angle, which can be used for extensive analysis. The team’s first paper on the topic was submitted in 2012, in which they proposed a network architecture to better process the data generated by the PMUs. The Internet, which we are so used to now, is an example of a network – an immensely interconnected web of nodes, each representing either a supplier or a consumer. The network requires basic underlying protocols that define its configurations and specify how it is to be used to function coherently. This set of protocols is called a network architecture. For a smart grid, the North American SynchroPhasor Initiative, or NASPI, specifies a framework which architectures should adhere to in order to support all the functions of a smart grid.
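The core of what a PMU computes can be sketched as a discrete Fourier transform at the nominal grid frequency: one cycle of waveform samples is correlated against a rotating complex exponential to extract the amplitude and phase of the fundamental component. This is a simplified illustration of the principle, not any particular PMU's implementation:

```python
import cmath
import math

def estimate_phasor(samples, samples_per_cycle):
    """Estimate the magnitude and phase of the fundamental (e.g. 50 Hz)
    component from one cycle of uniformly spaced waveform samples."""
    n = samples_per_cycle
    acc = 0j
    for k in range(n):
        # Correlate the samples with a complex exponential at the fundamental.
        acc += samples[k] * cmath.exp(-2j * math.pi * k / n)
    phasor = 2 * acc / n                    # complex phasor: peak amplitude and phase
    return abs(phasor), cmath.phase(phasor)

# A 230 V RMS (230*sqrt(2) V peak), 50 Hz waveform sampled 32 times per cycle:
wave = [230 * math.sqrt(2) * math.cos(2 * math.pi * k / 32) for k in range(32)]
mag, phase = estimate_phasor(wave, 32)      # mag is the peak amplitude, phase is 0 here
```

A real synchrophasor additionally timestamps each estimate against GPS time so that phasors from distant substations can be compared directly.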
The Internet functions by interaction between one source and one destination facilitated by a network. This paradigm, called unicast, is built into the network and its data-handling devices, called routers; the overlying applications, however, can be modified to provide a multicast – that is, to make the same data available to multiple destinations. For example, Facebook is an overlying software application programmed to mimic a multicast on the underlying network.
Similarly, publish-subscribe paradigms (shown in the figure), where multiple destinations request access from multiple publishers of data, can also be developed on overlays; that is, using overlying applications. This is how RSS feeds, which have become highly popular, function. Current smart grid architectures also utilize the publish-subscribe paradigm, with the PMUs publishing data which can be requested by subscribers, but using multicast overlays. This means that the underlying network still functions physically on unicast. Prof. Sivalingam’s team proposed that the underlying network and routers themselves be modified to provide multicast support, which they proved would improve QoS parameters by reducing delay in data transmission.
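The publish-subscribe pattern, and the overlay "multicast" it rests on, can be illustrated with a toy broker (all names here are hypothetical, not the team's software):

```python
from collections import defaultdict

class PMUBroker:
    """Toy publish-subscribe broker. The fan-out loop in publish() is the
    overlay multicast: the application sends one unicast copy per subscriber.
    The team's proposal moves this duplication down into the routers, so the
    network itself delivers a single transmission to all subscribers."""

    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of delivery callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, measurement):
        for deliver in self.subscribers[topic]:   # one copy per subscriber
            deliver(measurement)

broker = PMUBroker()
received = []
broker.subscribe("pmu/station-7/frequency", received.append)
broker.subscribe("pmu/station-7/frequency", received.append)  # a second subscriber
broker.publish("pmu/station-7/frequency", 50.02)
# Both subscribers receive the same 50.02 Hz reading.
```

With network-layer multicast, the publisher would instead emit the reading once and the routers would replicate it only where delivery paths diverge, which is what reduces the transmission delay.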
A second piece of research by the team focused on algorithms that ensured the reliability of the data sent over the network. This is vital because some of the data sent over the network finds applications in sensitive areas such as feedback control – regulating power production based on feedback – and state estimation, where the system’s current state is analyzed. This data is highly critical, and its loss, or errors in its values, can have far-reaching consequences. For example, underestimating the frequency of the produced power can lead to the system increasing the frequency at the production plant above acceptable limits, which in turn could adversely affect the functioning of devices connected by the consumers.
Over the Internet, reliability is typically ensured by sending the same data packets over multiple paths. This method, although highly reliable, takes up a lot of bandwidth. The team proposed three algorithms which utilized the resources better. The number of paths available on the network, chosen on the basis of accuracy requirements and network constraints, is denoted by N. The first method, called Single Link Fault Tolerance (SLFT), divides one packet of data into N-1 sub-packets and then creates a parity packet, whose components are used to check for errors in transmission. Here, each component is obtained by performing a bitwise exclusive-OR – a logical operation that, for two inputs, outputs true only when exactly one of them is true – across all the sub-packets. The parity packet itself is N – 1 times smaller than the original packet, so this algorithm reduces the bandwidth required by a factor of N – 1 compared with full replication. As a downside, this algorithm works only for large packets of data. Also, it is resilient to failure on only one of the N links (hence the name) and can fail if more than one link fails.
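The XOR-parity idea behind SLFT can be sketched as follows. This is an illustrative reconstruction of the encode-and-recover principle, not the team's actual implementation:

```python
def xor_bytes(a, b):
    """Bitwise exclusive-OR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def slft_encode(packet, n):
    """Split a packet into n-1 sub-packets and append one XOR parity packet,
    giving n packets to send over n disjoint paths. For this sketch the
    packet length must divide evenly by n-1."""
    size = len(packet) // (n - 1)
    subs = [packet[i * size:(i + 1) * size] for i in range(n - 1)]
    parity = subs[0]
    for s in subs[1:]:
        parity = xor_bytes(parity, s)
    return subs + [parity]

def slft_recover(received, lost_index):
    """Rebuild the single lost sub-packet by XOR-ing everything that arrived
    (including the parity packet) - XOR is its own inverse."""
    present = [p for i, p in enumerate(received) if i != lost_index and p is not None]
    rebuilt = present[0]
    for p in present[1:]:
        rebuilt = xor_bytes(rebuilt, p)
    return rebuilt

data = b"PMU-FRAME-50.02Hz-230kV!"   # 24 bytes; with N = 4, three 8-byte sub-packets
sent = slft_encode(data, 4)
sent[1] = None                        # simulate a single link failure
assert slft_recover(sent, 1) == data[8:16]
```

Losing any one of the four packets is recoverable; losing two is not, which is exactly the single-link limitation described above.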
The second algorithm, called Double Link Fault Tolerance (DLFT), is an extension of SLFT. Here, the packet is divided into N sub-packets and, instead of one, there are two parity packets. These sub-packets are sent over N paths, with three of them sent over the best path. This algorithm reduces the bandwidth requirement by a factor of N/3, which is acceptable for N > 3. A drawback to this method is that it introduces differential delay – the best-path packets arrive at the receiver at a different time than the others. On the other hand, it fails only if the number of failed links is greater than two.
The third algorithm proposed by the team, named Network Coding for Monitoring Grid, presents a slightly more complex method using publishers and routers (refer to figure). Prof. Sivalingam’s team found that the algorithm is resilient only to single link failures, but is more efficient and easier to implement than both SLFT and DLFT. Overall, bandwidth requirement was shown to be reduced by 30-82 percent, and data reliability was increased in some cases by up to 100 percent.
The third paper that the team submitted dealt with developing an effective method of combining two simulation frameworks required to computationally analyze a smart grid – the power system simulation and the communication network simulation.
The power system represents the generation and consumption part of the grid, where electrical parameters are determined. The communication network simulates the nodes through which these parameters are passed, depending on where and when they are needed.
Both the power system and the communication network simulators work well individually, but require synchronization to prevent errors in combined functioning because they differ fundamentally in their data processing – the power system works in the continuous time domain while the communication network works in the discrete time domain. The power system’s data plots a continuous curve on a graph while the communication network’s data is discontinuous, since it has data only at discrete points in time. Hence, effective data exchange between the two, which is vital to the grid’s functioning, can only be achieved by synchronizing time.
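The synchronization problem can be sketched schematically: the network side advances from one discrete event to the next, and at each event it must sample the continuous power-system state at exactly that simulated instant. The following is a minimal stand-in for both simulators, not the actual OpenDSS/OMNeT++ interface:

```python
import math

def power_system_state(t):
    """Continuous-time stand-in for the power-system simulator: grid
    frequency in Hz, drifting slightly around the Indian nominal of 50 Hz."""
    return 50.0 + 0.05 * math.sin(0.1 * t)

def run_cosimulation(event_times):
    """Discrete-event stand-in for the network simulator: at each scheduled
    PMU-report event, advance the shared clock to that event's timestamp,
    sample the continuous side at exactly that instant, and log the reading."""
    log = []
    clock = 0.0
    for t in sorted(event_times):
        assert t >= clock        # simulated time never runs backwards on either side
        clock = t                # both simulators agree on the current time
        log.append((clock, power_system_state(clock)))
    return log

# PMU reports scheduled every 20 ms of simulated time:
readings = run_cosimulation([0.0, 0.02, 0.04, 0.06])
```

The essential point is the shared clock: the discrete side never reads a power-system value at a time the continuous side has not yet reached, which is the error the co-simulation framework is designed to prevent.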
To this end, the team proposed a co-simulation framework for the leading simulation programs used in power and communication systems, the Open Distribution System Simulator (OpenDSS) and OMNeT++ respectively. They then implemented the framework in software, which is openly available on SourceForge.
This project by Prof. Sivalingam’s team is symbolic of how science works – small, definite steps of progress are made by researchers in the hope that their work will be carried forward, just like they carry forward someone else’s work. The smart grid initiative can still only be seen as a glass less than half full, but the few drops that Prof. Sivalingam’s group has added will go a long way in helping fill up the vessel.