PetroVisor PI Data Integration
PI SDK Integration - Web Activity
Data Source - Data Integration
Mapping Source - Entity Import
Data Processing
Introduction
What does PI stand for (Source)?
‘PI used to stand for Plant Information when, in our early days, we were predominantly used as a data historian within plant operations. However, we’ve grown very far since then, and have expanded across many different industries, so these days PI doesn’t stand for anything other than the brand name of our data infrastructure.’ – OSIsoft, acquired by AVEVA in 2020.
The PI System collects, stores, and manages data from a plant or process. The data sources are connected to one or more PI Interface nodes. The interface nodes get the data from these data sources and send it to the Data Archive. The data is stored in the Data Archive and can be accessed either through the Asset Framework (AF) or directly from the Data Archive, depending on the version of the system in place.
Figure 1: PI Infrastructure (source)
Note: Data sources in this context are not data sources in the 'PetroVisor' sense, but hardware collecting data from sensors (e.g. RTU, SCADA, etc.).
PI Access - PI SDK vs AF SDK
The PI Asset Framework (PI AF) was introduced in 2007 and is based on Windows .NET technology (AF SDK). This allows vendors like Datagration to connect and integrate with the PI AF SDK through the cloud. However, there are still pre-PI AF installations in the field that require the older Microsoft COM-based PI SDK. For that purpose, PetroVisor connects through a VM using a web activity (PI SDK – Production Data) to access the data (see the principal flow below).
Figure 2: PetroVisor PI Integration Infrastructure
PetroVisor PI Data Integration
As mentioned before, depending on which version of the PI Server the customer has installed, either the PI SDK or the AF SDK integration is used.
PI SDK Integration – Web Activity
The integration with a 'PI SDK' version of the PI System consists of two principal activities. The first is a web activity which connects to the PI server, retrieves the data into a CSV file, and uploads it into the PetroVisor file repository. The second is a data integration set which includes a CSV data source with the usual mapping for the extracted and uploaded CSV file. Note that the mapping can be time-consuming if it is not provided by the customer.
Web Activity – PI SDK – Production Data
The web activity has two main configurations: the first is the source and credentials for the connection; the second is the filter that determines which tags are integrated and retrieved from the PI server (see below).
It is recommended to use the ‘pipoint.tag’ filter and limit by signal, like in the above example with ‘pressures’.
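The effect of the 'pipoint.tag' filter can be illustrated with a small sketch. This is not PetroVisor or PI code; the tag names and the wildcard matching below are assumptions used purely to show how limiting by signal narrows the set of retrieved tags.

```python
from fnmatch import fnmatch

def filter_pi_tags(tags, pattern):
    """Return only the PI point tags matching a wildcard pattern,
    similar in spirit to the 'pipoint.tag' filter of the web activity."""
    return [t for t in tags if fnmatch(t.lower(), pattern.lower())]

# Hypothetical tag list; real tags come from the PI server.
tags = ["WELL01.PRESSURE.TUBING", "WELL01.TEMP.TUBING", "WELL02.PRESSURE.CASING"]
print(filter_pi_tags(tags, "*pressure*"))
```

Limiting by signal (e.g. pressures only) keeps the extracted CSV small and the subsequent mapping manageable.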
AF SDK Integration
The core concept of the 'Asset Framework' is that instead of dealing directly with the 'PIPOINT.TAG' definition, there is a definition of an element (entity) and an attribute (signal). Instead of having to know the tag, the PI AF API can be queried with an entity and a signal and returns the PI point tag. With this tag the data can then be retrieved. These two steps are automated within PetroVisor through the definition of two sources: one for the mapping and one for the data.
We have implemented the concept of two sources so that the automated mapping can be separated from the data load processing, which is time-consuming: with PI as a source, we process millions of records every hour.
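The two-step lookup described above can be sketched as follows. This is a minimal simulation, not the AF SDK: the mapping table from (entity, signal) to PI point tag and the sample archive values are assumptions; in PetroVisor the mapping is produced by the automated 'mapping' source and the data comes from the Data Archive.

```python
# Hypothetical (entity, signal) -> PI point tag mapping.
TAG_MAP = {
    ("Well A", "tubing pressure"): "WELLA.PRESS.TUB",
    ("Well A", "oil rate"): "WELLA.RATE.OIL",
}

def resolve_tag(entity, signal):
    """Step 1: ask the (simulated) AF for the PI point tag."""
    return TAG_MAP[(entity, signal)]

def read_values(tag):
    """Step 2: retrieve the data for that tag (stubbed sample values)."""
    sample_archive = {"WELLA.PRESS.TUB": [101.3, 99.8]}
    return sample_archive.get(tag, [])

tag = resolve_tag("Well A", "tubing pressure")
print(tag, read_values(tag))
```

Separating the two steps is exactly why the mapping source can run on a slow schedule while the data source runs frequently.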
Data Connection
For the PI AF connection, the URL, user, and password are required. In case the customer provides a 'trusted site' or PetroVisor is running within the customer's IT environment, 'integrated security' can be used instead of user and password.
Once the connection has been created and saved, the connectivity can be tested as usual. The PI AF connection offers some additional features, as shown in the screenshot below.
With 'Export PI Tags', all PI points of a selected node are exported and saved into the PetroVisor file repository as a CSV file. 'Explore' opens a tool that allows browsing the PI AF for nodes, elements, attributes, templates, and so on.
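For illustration, a tag export of this kind boils down to writing the PI point names into a simple CSV. This is a sketch only: the actual column layout produced by 'Export PI Tags' may differ, and the single 'Tag' column here is an assumption.

```python
import csv
import io

def export_pi_tags(tags, out):
    """Write PI point tags to CSV, roughly what an 'Export PI Tags'
    action would produce (exact PetroVisor layout may differ)."""
    writer = csv.writer(out)
    writer.writerow(["Tag"])  # assumed header
    for t in tags:
        writer.writerow([t])

buf = io.StringIO()
export_pi_tags(["WELL01.PRESSURE.TUBING", "WELL02.PRESSURE.CASING"], buf)
print(buf.getvalue())
```

Such an export is useful for reviewing which tags exist under a node before configuring filters or mappings.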
Data Source – Data Integration
We start with the data source, because an initial data source setup is required before the mapping source can be configured and saved.
So we set up a new PI AF source with a PI AF connection. For the configuration we select the PI connection and the 'Import Type' 'Numeric Time'. Under 'Mappings' we add one 'dummy' entry so the source can be saved.
For the naming, the convention '[Template Name] Data' is recommended. This makes clear that the source is used for integrating the data and is mapped with attributes from [Template Name].
Mapping Source – Entity Import
We may have already used the 'Explore' tool to understand how the PI AF is organized and from which node and template the mapping information will be extracted. We now set up a PI source with import mode 'Entity' as shown below.
The sequence is to select a starting node through <Browse> and then to select the desired template. After that, the data source created before is selected; in our notation it is the 'PW Allocation Data' source.
Then we do the mapping: switch to the tab and 'Extract' all attributes. As usual, the attributes are then mapped to PetroVisor signals.
The final step is to select the default entity type and save the source, in this case as 'PW Allocation Mapping'.
Data Processing
The final steps are then to set up one integration set for the 'mapping' and execute it at least once so that the mapping is automatically generated into the 'data' source. After deleting the 'dummy' from the 'PW Allocation Data' source, a data integration set can be set up to execute this source manually or through a workflow. It is important to always provide a start and end date for processing, as otherwise the request will time out.
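One common way to keep requests under the timeout limit is to split a long history into smaller date windows and process them one after another. The sketch below shows this idea; the 30-day window size is an assumed value, not a PetroVisor default.

```python
from datetime import date, timedelta

def date_chunks(start, end, days=30):
    """Split [start, end] into smaller windows so each request stays
    within a timeout budget. The 30-day window is an assumed value."""
    chunks = []
    cur = start
    while cur < end:
        nxt = min(cur + timedelta(days=days), end)
        chunks.append((cur, nxt))
        cur = nxt
    return chunks

for s, e in date_chunks(date(2023, 1, 1), date(2023, 3, 15)):
    print(s, "->", e)
```

Each window can then be passed as the start and end date of one integration-set execution.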
If new wells are expected to be added to the PI AF, it is recommended to set up a schedule for the 'mapping' data integration set so that new wells and their mappings are added automatically.