Understanding Utility Data with Actionable Intelligence
Democratization, decarbonization, and decentralization are driving change in the utility marketplace. Customer expectations and technology are steering grid operators in a new direction in how they manage their data as well as their assets. We are seeing utilities start to challenge the old “Run-to-Failure” model for critical equipment. The transition to a more sustainable grid, and the vast amounts of data now available from both devices and utility IT systems, require new IT solutions. Onesait Utilities Intelligence is an analytics platform that can future-proof utility data analytics processes and improve a utility’s agility in navigating these changes.
Presented by: Giovanni Polizzi, VP Sales & Marketing, Minsait ACS
Download the Slides | Watch the Presentation
Intro: This presentation explains how data that is already available in the utility can improve the business of the organization.
Utility Data Analytics for Grid Assets and Grid Operation
We are talking about technical data analytics today, along with use cases on Onesait Utilities Intelligence. We will see clustering analysis, load forecasting analysis, hourly overload distribution, anomaly reviews, and non-technical losses analysis. We can locate and identify performance deviations and exceptions that power flows show during operations. We can determine patterns of demand profiles and aggregate different network points. So, from the meters up to the system level, we can analyze the performance of grid assets and of the processes associated with grid operations.
Collection and Processing of Utility Data with Onesait Utilities Intelligence
Onesait Utilities Intelligence performs four different functions. The first is the collection and processing of data: real-time data acquisition and data pre-processing. It has a streaming engine, so we can stream data into the system and analyze it directly - for example, comparing against thresholds, or detecting a sudden increase or decrease in one of the measurements. We can take digital signals as well as analog signals. With collection and processing of data, we are talking about operating efficiency. We are trying to get as close to real-time as possible, to be immediately alerted that something is not as expected.
The second piece is storage. We store data in a real-time and historical database, and we can also import existing stored data. That is especially useful for the next step, which is the analysis.
This is where a data scientist such as Leonor plays a role: she analyzes data, runs algorithms, and trains algorithms on past data to forecast what is going to happen in the future. It is a multi-language environment, so there are different possibilities to write code.
And then there is the business vision. That is a set of dashboards, reports, and notifications - through texting, for example, through emails, or through web pages. So, we are going from operating efficiency to business profitability. We are trying to forecast things before they occur and to act before they occur.
We represent here the data life cycle - everything within that bright blue frame. This is the system we are going to talk about today. You see here the different components that make up the platform, called Onesait Utilities Intelligence: the collection part; the reaction part, where we do data pre-processing; the organization part, which is the storage that we saw before; the part we call learn, which is the analytics, where we learn things about the business; and then the part where we improve things by visualizing the effects. We don't expect that everything analyzed in the platform has an immediate interaction with the user. We don't expect a user to be constantly looking at it, but the user must be notified when things are not as expected.
And then there is the administration piece that we see here, a very important part. You may find similar platforms out in the market, but you will probably get all these different elements as separate components integrated somehow. We have managed to wrap everything inside one single application, called Onesait Utilities Intelligence. The administration tools allow administration of security, user profiles, data access, and the configuration of every single component of the platform. It is very easy to learn and very easy to maintain, from an IT point of view or from a user point of view.
Feeder Data, Weather Data, Demand Data
We talked about data - so which data? This is a simple representation of what we mean. On the top side here we represent everything that is technical: substation feeder data coming from SCADA, breakers, regulators, or IoT devices that you already have in the network - for example, devices measuring voltage around the network, power factors, or power quality in general. We have weather data, both historic and, of course, forecast. We have demand data associated with MV90 and AMI, and PV and storage data coming from residential and commercial and industrial customer premises. We bring everything into Onesait Utilities Intelligence, and any department in the company can be provided with this information - data analytics, engineering and planning, data visualization, knowledge sharing.
Training Utility Staff on The Effects in The Network
For example, to train people on effects in the network, to create alerts, to generate workflows, to provide information to groups that are working together to solve a problem or address an identified issue in the network. All these participants become important when we deliver the information. This means that when we talk to clients, we sometimes talk to the SCADA department, but normally other departments are also involved in the use of Onesait Utilities Intelligence. It takes longer to capture the value of such a platform within an organization, but the value it delivers is so much greater when more than one department has a view of what this application can do. Let me jump to the use cases.
Use Cases - Leveraging SCADA Data for Better Operational Insights
So, let us see the use cases. Some of you have seen our demonstration of Onesait Utilities Intelligence, and you've seen how we are leveraging SCADA data to improve operations and gain better operational insight. We're taking data to analyze how substation transformers are doing, to notify about overloads and hourly load, and to understand the load across the entire fleet of transformers in the network. We also have a more traditional way of looking at voltage profiles, currents, power factor, and bus voltage across the network. You have seen this information aggregated from the system level, in terms of kW, down to the circuit level, in terms of voltage, current, or power factor. This is one of the use cases, probably the most immediate. It is like a very evolved historian, but it also provides very important information in terms of aggregation.
Live Data Over Circuit Diagrams
The next use case is very much related to SCADA, when you want to provide this information outside the SCADA network. For example, a G&T may want to provide this information to its members, in terms of the circuits fed by the G&T organization. This case here is quite simple, but it's an indication of what can be shown. This is a substation with the different circuits coming out of it, each provided with real-time information. Something different is here: a representation of a switchyard in the power station. Again, if you have people in the field who need this information in real-time, this is a particularly good way of delivering it through a tablet. Bear in mind, all the symbols you see can be configured and customized. If your organization uses several symbols, this could be a way of normalizing the representation: you create a library of symbols, and those symbols are then used across all your diagrams. So, any circuit can be represented this way. It's a very easy way of reproducing SCADA displays outside the SCADA network to provide information to your customers - for example, a commercial and industrial customer fed from one or two substations. This could be an effective way of showing that information to them.
AMI Data Analysis Dashboards
The other thing I am going to show you is two different dashboards related to the analysis of AMI data. One is the customer load analysis, and the other is the transformer load analysis. Customer load analysis considers loads at the customer level. Then, when we start to aggregate the data from customers, we start to analyze the service transformers. Normally, transformers that are not sensorized provide little information, so we can start to provide information on how those transformers are being loaded. And this becomes important when we start to analyze the new loads coming into the network related to electric vehicle charging.
Voltage and PF Variation
So, let us look in a bit more detail at what we can do with these dashboards. One of them is voltage and power factor variation. We have information coming from residential smart meters, and we can analyze the quality of power delivery. By aggregating information from different meters - close to the substation, far from the substation, and somewhere in between - we can build the voltage profile of a feeder: feeder head, mid-feeder, and end of feeder. We can see whether these three lines are close to each other or very separate, indicating a decay of voltage along the feeder. We can see how many times we have voltage violations, spot the anomalies, and see when they occur. During the day, we may see that voltage violations are more frequent in one section of the feeder, or more frequent late at night or in the middle of the afternoon. That allows a utility to take measures - by placing a storage device along the feeder, or a load tap changer to counteract voltage drops along the feeder.
Power Factor Monitoring
Another case is power factor monitoring. By monitoring power factor, we can see whenever it drops. In this case, we indicated 0.9x, depending on the threshold that you want to consider, but that's another way we can study and understand how the feeders in the network are doing.
Load Analysis Dashboards
What we are seeing here is load analysis, whether at the substation level or down at the customer level. We compare consumption with similar periods in the past, so we can see load that is slowly increasing along the feeder. That could mean more appliances, or electric vehicles that our customers have been deploying over the years along the feeder. We can see how much energy has been used during peak hours and off-peak hours, and the distribution of load over 24 hours. Seeing how the different blocks build up the entire consumption in a day becomes important for understanding when demand is highest at the system, substation, or feeder level. Then we can compare demand versus temperature - how temperature influences the load of the system or a particular substation.
In this case, we are aggregating at the service transformer level, so we are not taking raw meter data and analyzing meter by meter. Again, there is overload analysis - whenever the load of a transformer goes above 100% or above 90% - the load distribution during the day, when the transformer is most heavily loaded across the 24 hours of the day or the days of the week, and the hourly overload distribution. We can review the historic load - the number of times the transformer has stayed above 90% load for three or more consecutive time intervals. All these calculations can be run in the system. The algorithms are already there, but they can also be customized. Maybe you are not interested in three continuous time intervals but in five. That is a parameter we can change in the algorithms.
This becomes important when you are analyzing time-of-use rates and their effectiveness. The algorithm runs clustering analysis across the entire population of meters and spots different consumption profiles. In a pilot that we did, over 20,000 meters, using two years of data, we found 10 different customer profiles, which are potentially 10 different time-of-use rates or 10 different commercial offers that can be made to customers to distribute those peak loads across the 24 hours of the day and avoid hitting a peak in a certain period.
Especially important: by taking customer metering data and weather forecast information, we can forecast hourly load from 24 hours up to a week ahead at the customer level, then aggregate up to the system level; the higher the aggregation, the more precise the forecast we can produce. At this moment we are analyzing up to five different variables: temperature, humidity, wind speed, wind direction, and the period of the year/day of the week. All these variables influence the load of the system, and we are trying to understand the influence of each one. This is done by the forecast analysis module that you see here. Once the forecast is done, we have an internal system to control the error. If we see that this error is constantly increasing, we can retrain the algorithm to reduce the error against the real measurements. Having the hourly load 24 hours ahead is valuable information for the control center to organize switching. If you use that information together with your most loaded feeder, you can divert part of the energy flow into other, less loaded feeders. All this information can be provided 24 hours to one week ahead. The further out in time we go, the less precise the forecast will be, and as the date gets closer it becomes more precise.
Asset Technical Data Module
This is the asset technical data module. What we are trying to represent here is how data is organized. We have templates for various kinds of assets. We can generate an asset tree with all the distinct levels - from system to substation to transformer bank, feeder, circuits, etc. - and we can go down to the different IEDs along the network. Any component with a data feed you are interested in can be represented in the application. Once it is there, we can create templates: each time you create a new substation, you already know there are several data points that will be taken from SCADA, and each time you create a new transformer bank, there are several data points that will come from SCADA or other systems. This also gives you the possibility to add technical data - for example, the make and model, or the installation year of a transformer - which allows analysis of the data by its technical characteristics.
Live Data from Power Plants
Another window that I really like is this live data from power plants. Some of you are distributors; some of you are on the generation side. If you want to use the same system to take data from the grid as well as from a power plant, this is the right system for you. Here we are representing a hydroelectric plant. We are taking real data coming directly from the field; some other data is calculated. We can represent all the measurements that we want - for example, temperatures, currents, voltages. This is not only for your SCADA data and your typical feeder information, but also for your storage. If you have batteries in use, or solar farms connected to your network, this could be a way of understanding how they are performing, and a way to show parts of the company that do not have access to the OT network how the system is performing.
The last one is live data, again from power plants. In this case, we are seeing information coming directly from a wind farm, and then a more general view of a solar farm and how it has performed over a certain period.
Data Acquisition Module
I also wanted to show the data acquisition module. We are getting a bit technical here, but the data acquisition module - the first part that we saw in the diagram before - is where data comes in from the field. We already have several industrial protocols implemented. We have IoT protocols. We have direct connections with relational databases, and file-format imports are already available. Whenever we are creating a technical database of asset characteristics, we can take that information from your asset management system. We can take coordinates from your GIS system. We can take data directly from other databases or directly from the field, through Modbus. There is really a plethora of possibilities for importing data into the system automatically. It is not only a flat-file interface, but also a very dynamic, real-time interface. The configuration of the data acquisition module is all graphical; there is no need to program - it is drag and drop. What we are doing is configuring imports, but also configuring controls and quality checks of data before the data is stored in the system.
Value of Onesait Utilities Intelligence
Briefly going into the value: there are several accelerators, in terms of data collection and data processing. Portability - we can interchange databases. We can contextualize data, so if data comes from SCADA, it is not only SCADA data; it can also be used for maintenance, planning, engineering, and troubleshooting. Once data comes in, that is where we contextualize it. We put data inside different contexts and domains, so that other systems, or different work groups in the same system, can work together. And then there is the data visualization part. We are used to seeing data in spreadsheets, or the classical table. Rather than downloading and working with Excel, think about having dashboards that are dynamically adaptable. You can scroll back and forth in the time frame. You can overlap one signal on another to compare different magnitudes. All these things are the accelerators we see in Onesait Utilities Intelligence.
Intelligence Quick Start – From PoV to Gradual Roll-Out
Here we are representing a project that starts with a proof of value, to understand the use cases in which you are interested. You can see the ones available out of the box, and you can create new ones. Within one to four months - I would say around four months - we can start to have data coming in, import the historical data that you have, and show or reorganize information you already have stored. Then, once you are familiar with the use cases, we can ramp up. We can start to create automatic data feeds from systems. And then, in the next 5 to 12 months, we can ramp up further, with more data coming in and more use cases, and incrementally improve the data and the analytic models. Then comes the consolidation part - when you start to change processes in the company, because you are seeing the effects, the value of the data, and how things can be steered so the process improves.