Friday, August 24, 2007

OPC Connectivity All the Way – Making the most of it

What do ExxonMobil, Shell, Ford, General Electric, Dell, Pfizer, and 45% of the other Fortune 500 companies have in common? Other than the fact that they are all highly successful businesses, they all rely on OPC to help them stay profitable.

In a world driven by falling prices and shrinking margins, increasing efficiency in process and manufacturing is a matter of survival, and interoperability and integration of information systems are vital. The need for interoperability and integration varies as business dynamics vary. Reducing inventories requires faster response times. Better planning and decision making require access to accurate, current information. Improving consistency and quality requires more automated business processes. Meeting growing customer service and compliance reporting demands requires access to additional information. All roads lead to improved interoperability, and OPC connectivity is paving the way, all the way!

Interoperability can be defined as “the ability of applications and systems to communicate and exchange services with each other based on standards and to cooperate in processes using the information and services”. To put it more simply: Everyone accesses information, gets data, shares data, understands data and uses the information in a standard way.

OPC has grown in stature over the last decade, from its original standards based on Microsoft’s COM/DCOM technology to the recent platform-independent OPC UA (Unified Architecture). OPC UA extends the domain of interoperability from the plant floor to enterprise-level applications, and is set to become the architecture of choice in the near future.

OPC Enables Integration
Regardless of what business a company is in, market dynamics are demanding faster response times and more efficient decision making. Customers need more data, faster, derived from multiple sources and delivered simultaneously to many destinations. In order to achieve the benefits of flexible, scalable and interoperable systems, without high integration costs and time, the solution must also be standardized across multiple vendors, systems and products. That is the challenge OPC successfully addresses.

In a nutshell, OPC provides a functional interface for reading and writing data in an efficient and deterministic way. There are separate specifications to address different data semantics, including real-time data, historical data, alarm and event information and batch data. The interfaces are comprehensive enough to provide the functionality that users require, yet simple and practical to implement, which results in wide vendor acceptance. The OPC specifications are implemented on Microsoft’s distributed binary communication protocol, DCOM. This offers several advantages, including high speed data transfer capability, efficient handling of multiple client/server connections and built-in operating system level security. In addition, many of the major control systems, machine interfaces, historians, expert systems and other automation applications are widely deployed on the Microsoft Windows platforms. Proper adherence to the OPC standards is aided by the OPC Foundation Compliance Testing tools and product interoperability sessions. These factors have led to the creation of fast, flexible and reliable connectivity solutions that businesses require.
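To make the read/write model concrete, here is a minimal sketch using the open-source OpenOPC library against the Matrikon simulation server. The library, server ProgID and tag names are illustrative assumptions; none of the products discussed in this article is tied to them.

```python
# A minimal classic OPC DA read/write, using the open-source OpenOPC
# library on Windows (an assumption; the article names no toolkit).
# Server ProgID and tag names are the Matrikon simulation defaults.
import OpenOPC

opc = OpenOPC.client()                    # COM/DCOM-based OPC DA client
opc.connect('Matrikon.OPC.Simulation')    # attach to a local OPC DA server

# A read returns value, quality and timestamp together; quality flags
# are part of the specification, not bolted on by the application.
value, quality, timestamp = opc.read('Random.Int4')
print(value, quality, timestamp)

opc.write(('Bucket Brigade.Int4', 42))    # writes use the same tag namespace
opc.close()
```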

OPC UA Drives Interoperability
Since classic OPC offers so much in the way of connectivity, it begs the question: why introduce OPC UA? The primary purpose of classic OPC was to solve the integration problem between devices and PC-based client applications. The automation industry’s desire for connectivity standardization has led to OPC being used in a wider range of applications than was originally considered. The scope now extends to enterprise-level interoperability, which includes applications from the field level all the way to the realm of Enterprise Resource Planning (ERP) software, across multiple hardware platforms, and in globally diverse installations. As technology and market requirements change, so must the interoperability standards; OPC UA therefore extends the scope of the classic OPC specifications. The single OPC UA architecture encompasses and unifies the functional data formats for real-time, historical, event-based and batch information. The OPC UA specifications also go further in setting standards for application security, reliability, audit tracking and information management. These are key components of an interoperable enterprise architecture.

The OPC UA specifications are implemented on a service-based architecture, which leverages existing standards such as XML, SOAP and the WS-* initiatives. Service-based implementations are supported by Microsoft as well as many other operating systems. This means OPC UA will be available on more platforms, including embedded operating systems, extending the power of standards-based connectivity across more layers of the enterprise. A service-based model also allows OPC UA to leverage standard security mechanisms such as authentication, encryption, data integrity and auditing. These are important features for companies facing increased security requirements.

In addition to extending, unifying and allowing backwards compatibility with existing OPC products, OPC UA offers a rich information model to better transform data into information. Not only does OPC UA allow access to multiple data sources and formats, the architecture also supports reference semantics, so client applications can discover and understand the information they are collecting. These capabilities promise more powerful OPC UA client applications in the future. The same flexible, secure interfaces could be available on a smart transmitter, the control system operator station, the historian, the maintenance database and the manufacturing execution system: a single interoperable data highway from the shop floor to the top floor.
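As a sketch of what that discovery looks like in practice, the snippet below browses a UA server’s address space using the open-source python-opcua library (an assumption; the endpoint URL and the commented-out security string are illustrative).

```python
# Browsing a UA server's address space with the open-source python-opcua
# library (an assumption); the endpoint URL is illustrative.
from opcua import Client

client = Client("opc.tcp://localhost:4840/freeopcua/server/")
# UA security is negotiated per session; certificate paths are hypothetical:
# client.set_security_string("Basic256Sha256,SignAndEncrypt,cert.pem,key.pem")
client.connect()
try:
    # Start at the Objects folder: a client can *discover* the model
    # instead of being hard-coded against a fixed tag list.
    objects = client.get_objects_node()
    for child in objects.get_children():
        print(child, child.get_browse_name())
finally:
    client.disconnect()
```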

ICI uses OPC connectivity to drive competitive advantage
As the world becomes more comfortable with OPC, the benefits of sharing and compiling data across the plant floor become more evident, and investments in centralized data repositories make more sense than ever before. At the Polyester Fibre Plant of ICI near Lahore, Pakistan, management took exactly such steps. Increased pressure on production to enhance manufacturing efficiencies required them to know where the inefficiencies were actually located.

Using OPC connectivity, data from Siemens, Foxboro and Rockwell systems was integrated, in a literally plug-and-play manner, into a real-time data management system powered by SENSYS’ IntelliMAX Plant v3.0.

Management was now able to historize plant-wide data, run reports to identify process bottlenecks, and then take steps to remove those bottlenecks.

Productivity rose steadily as time went by. Three months after the SENSYS Solutions team helped deploy the Real Time Data Management System, an overall increase of 10% in productivity was observed.

Wednesday, August 22, 2007

What happens when your machines start talking to you?

It’s ten o’clock in the evening and your cell phone rings. It’s variable speed drive XJ721 on Kiln 21 in your dry process area; he says his temperature has just crossed the high alarm limit. As soon as you hang up, your compressor WB21 calls: his maintenance is due in 15 days, and he is just checking in to tell you that he has asked the asset manager to arrange for his spares.

Your production manager’s life is no different: his plant floor keeps telling him when orders are completed, and his machines tell him about their overall equipment efficiency and their downtime KPIs. Even the sales team knows which purchase orders have been completed in production. Their forecasting is now more accurate than ever before.

The above scenario is possible because of a unified plant-wide data repository that communicates with enterprise applications seamlessly.
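A hedged sketch of how such a “talking machine” could be wired up: subscribe to a drive temperature over OPC UA and page the engineer when the high-alarm limit is crossed. The python-opcua library, endpoint, node id, limit and send_sms() below are all hypothetical stand-ins; the article does not describe ICI’s actual implementation.

```python
# Hypothetical wiring for a "machine that calls you": watch a drive
# temperature over OPC UA (python-opcua, an assumption) and page the
# engineer when the high limit is crossed.
import time
from opcua import Client

HIGH_LIMIT = 180.0  # illustrative high-alarm limit

def send_sms(message):
    print("SMS:", message)  # placeholder for a modem/gateway call

class AlarmHandler:
    # python-opcua calls this on every subscribed data change
    def datachange_notification(self, node, val, data):
        if val > HIGH_LIMIT:
            send_sms("Kiln 21 drive XJ721: temperature %.1f over limit" % val)

client = Client("opc.tcp://localhost:4840")        # illustrative endpoint
client.connect()
try:
    temp = client.get_node("ns=2;s=Kiln21.XJ721.Temperature")  # hypothetical id
    sub = client.create_subscription(500, AlarmHandler())      # 500 ms interval
    sub.subscribe_data_change(temp)
    time.sleep(60)                                 # monitor for a minute
finally:
    client.disconnect()
```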

The ICI Polyester plant management have their machines talk to them on a real-time basis. Their work-in-process inventory is at its lowest ever, and their production is at its highest ever. Only 20% of all plants in the world enjoy these efficiencies; production intelligence remains limited because of the complexity of implementing it. Moreover, the return on investment comes only if management is able to make successful use of all the information that becomes available to it.

This exact problem was faced at ICI Polyester Fibres, a premier synthetic fibre plant in Pakistan. The plant was commissioned in 1982 and, with numerous expansions, today has a capacity of over 350 tonnes per day. Faced with stiffer market competition, ICI Polyester even deployed SAP to improve information flow. While SAP solved many of their managerial problems, organizational islands still remained. To cope with that, ICI needed to bridge their plant floor with the rest of the organization. This was no easy task, considering that ICI Polyester had accumulated around 30 automation solutions from various vendors over its 25-year history.

They needed a product that could work as a global data integrator and interpreter at the same time. They chose IntelliMAX for these reasons, and for the applications support their integration vendor would receive from the SENSYS Solutions (http://www.sensys.com/) team.
To bridge so many automation islands, a unified layer had to be developed: one that would gather information from all the islands, tie them together and make the information flow transparent. That layer was OPC. ICI Polyester Fibres contacted the Applications & Solutions team of SENSYS Inc. for the design and development of this MES layer.

The project was broken down into phases. The first part was to develop a central repository accessible to all the relevant functions. Once this central repository had been linked to all the automation islands, it would then be linked to SAP, thereby feeding accurate, real-time data to all the relevant personnel within the organization.

IntelliMAX, being OPC-compliant from the ground up, was the obvious choice for this task. For the initial phase, key critical areas were identified. OPC drivers for the different vendors’ systems brought seamless connectivity to IntelliMAX.

The SENSYS Solutions team worked closely with the ICI engineering team and their automation contractors to understand current operations and identify how best to extract information from them to de-bottleneck production processes.
Following the needs analysis and implementation phases, and thanks to IntelliMAX, Rizwan Afzal, Engineering Manager at Polyester Fibres, even gets a New Year greeting SMS from some of his motors.

Tuesday, August 21, 2007

Product Life Cycle Management – Migration to New Technology & Managing the Migration Process



The product life cycle principle holds that all products are born, go through a maturity cycle and eventually retire. Graphically represented, the product life cycle is the curve illustrated in Fig. 1.


Fig. 1 Product Life Cycle

The time factor in this graph is determined by a number of variables. For automation software products specifically, these variables include, in no particular order of importance:

- Software development technology evolution – SOA, XML, SOAP, SQL, etc.
- Communication industry progress – bandwidth and networking technology
- Computer hardware industry progress – processor speeds, RAM sizes, etc.
- Industry consolidation – mergers, acquisitions, technology partnerships, etc.
- Industry standards – OPC, TCP/IP, ISA-95, etc.

These are just some of the variables that affect the time factor. Their combined turbulence tends to shorten the number of years a product takes to get from first commercial release to maturity.



As products move from first introduction to maturity, the noticeable changes are:
- no new features being introduced once the product is functionally stable,
- limited technical support at the maturity stage,
- no spares or technical support at the retirement stage.

Shrinking product life cycles leave businesses with reduced returns on their investments, as they find themselves increasingly pushed to move to new technology. Typical reasons that compel migration to new technology are:

1. System technical support – vendors pull back technical support for systems that have been declared obsolete.
2. System spares support – vendors pull back spares support for systems that have been declared obsolete.
3. Data portability – legacy systems keep information in isolation, whereas the need is increasingly for on-demand data interchange services.
4. Most importantly, cutting-edge technology is built around the concept of increasing operational efficiencies and decreasing costs. Investing in technology more often than not yields higher returns on investment.

As technology continually evolves and today’s new technology becomes tomorrow’s legacy system, managing the migration process is a perpetual activity that organizations find themselves involved in.

Planning the technology migration process is driven by:

1. Automation technology products based on open standards – give yourself the flexibility to work with the range of vendors of your choice, fetch data from every one, and analyze it throughout your migration process.
2. Standards-based connectivity products – these make it easy to integrate with legacy systems, which allows for a planned, phased evolution.
3. Products built on technology architectures that provide for an 18-20 year product evolution life cycle – migration is inevitable, but at least you will only be upgrading product versions every three years, as opposed to performing a complete system overhaul and then handling change management issues within the organization.
4. Products that are scalable – migration is complex enough with technology evolution; it should not be complicated by having to migrate due to expansions as well.


Technology Migration Plan at Japan Power Generation Limited

Japan Power Generation Limited, a 135 MW power plant with 24 medium-speed diesel engines, was installed with the OEM-provided Diasys control system.

Japan Power faced the same technology migration issues as every organization today. Its technology migration strategy was to:
1. Select a product based on open standards.
2. Phase the migration process.
3. Provide a thorough, industry-standard OPC interface to the entire automation system.
4. Use the standards now deployed to live out the full life cycles of the products in use.

The Power of Standards-Based Connectivity

When Japan Power first set out to develop a migration system, they faced a number of issues, the first and foremost being how to collect data from a proprietary system into a system built on open standards. OPC (www.opcfoundation.org) was the logical choice here, for the following reasons:

1. Managing the continual migration process would only be possible with an open-standards product.
2. Fetching data from a proprietary system would only have been possible by using OPC.
3. Using industry standards, they could now phase their migration plans.

Japan Power chose IntelliMAX for its through-and-through standards-based connectivity interfaces. Not only did IntelliMAX provide all six OPC interfaces, it also provided ODBC, SOAP and XML data connectors for eventual enterprise-wide integration.
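To illustrate the ODBC path, here is a minimal pyodbc sketch that pulls aggregated values out of a historian. The DSN, table and column names are hypothetical; the article only states that an ODBC connector is available alongside the OPC interfaces.

```python
# Pulling aggregated history over ODBC with pyodbc; the DSN, table and
# column names below are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect("DSN=HistorianDSN;UID=report;PWD=secret")
cursor = conn.cursor()
cursor.execute(
    "SELECT tag_name, AVG(value) AS avg_value "
    "FROM history WHERE ts >= ? GROUP BY tag_name",
    "2007-08-01",
)
for tag_name, avg_value in cursor.fetchall():
    print(tag_name, avg_value)
conn.close()
```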

The SENSYS Solutions (www.sensys.com) team joined hands with the Japan Power engineering team to develop the most suitable solution. IO Server was chosen as the OPC server to fetch data from the Diasys DCS via its Thicknet interface and feed it into the IntelliMAX server through the IntelliMAX DA client.

The Power of DataMAX – Advanced Historian

Japan Power’s management was previously unable to archive data. With the integrated historian DataMAX, they can now archive historical data, analyze plant metrics and evaluate production performance objectively. To top it off, ReportMAX fulfilled the need for a dynamic reporting tool, offering both preconfigured and custom Web- or Excel-based historian reports, both periodic and on-demand.
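As a sketch of the kind of periodic report such tooling produces, the snippet below rolls archived generation data up into daily averages and load factor and writes an Excel file. Apart from the 135 MW plant rating quoted above, the file layout and column names are illustrative assumptions.

```python
# Rolling archived generation data up into a daily Excel report.
# Requires pandas plus openpyxl for the Excel output; the CSV layout
# and file names are hypothetical.
import pandas as pd

RATED_MW = 135.0  # plant rating quoted in the article

# Hypothetical export of historized data: one row per sample.
df = pd.read_csv("jpgl_history.csv", parse_dates=["timestamp"])

daily = (
    df.set_index("timestamp")["plant_mw"]
      .resample("D").mean()          # daily average generation
      .to_frame("avg_mw")
)
daily["load_factor_pct"] = 100.0 * daily["avg_mw"] / RATED_MW

daily.to_excel("daily_generation_report.xlsx")  # Excel-based, on demand
```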

Way Forward

The SENSYS Solutions team has been following the migration process closely with Japan Power management. The vision is to migrate the Diasys hardware to open-connectivity hardware as well, following the same principles adopted in the first phase of the migration.

Japan Power now has a fully functional migration system in place; it no longer needs to worry about vendors pulling back support, or about spares becoming unavailable.

As they plan expansions to their plant, they know that their existing systems are built for scalability and will grow with their business.

The real-time SOA architecture used by IntelliMAX is designed to give the product a life cycle of 20-plus years.

Japan Power is now planning to link real-time plant floor data with its enterprise applications to put process and manufacturing efficiencies into practice. Having invested in IntelliMAX, a versatile solution based on an open architecture, JPGL is geared to fulfill its automation software needs of today and tomorrow.

The Trade-off Prism and the Cue Factor – The Impact of Concurrent Object-Oriented Engineering


The Trade-off Prism
An automation project, as is the case with any other project, is about schedule, budget and functionality: the trade-off prism. Increase the features in the application, and timeline and money suffer; reduce the timeline, and money and features suffer; decrease the budget, and both timeline and features suffer.

Mythical Man Months
Closely related to the trade-off prism is the concept of the mythical man-month. Like the trade-off prism, it applies to any project.
The relationship between the total number of person-hours needed to complete a job and the total number of resources actually deployed on the project is not linear. As you increase the resources deployed, their cumulative efficiency decreases, with the cue factor coming into play. The cue factor is an empirical factor that accounts for the inefficiencies that a team of individuals working together is bound to have.
Th = Tp x Wh x (1+α)

Where:
Th = total time taken to complete the project in man hours
Tp = Total resources employed on the project
Wh = The number of work hours available per resource (assuming the time given by each resource is constant)
α = Cue factor, an empirical factor calculated from the diversity of the group (D), the number of people in the group (N), the number of years they have known each other (B) and the independence of each team member’s work from the rest of the team (µ).

α = N x B x D / µ

Project managers the world over face the challenge of fighting α. As team dynamics become more coherent, α comes closer to 0; newer and bigger teams with greater social diversity can have α as high as 20. Evidently, as you increase µ, α is bound to decrease in value.

µ is measured as follows:

µ = ∑ µᵢ

where the sum runs over the individuals in the team, and µᵢ is the independence of member i’s work.

As µ approaches ∞, α approaches 0, the (1 + α) overhead factor approaches 1, and the mythical man-month effect vanishes: the team operates at 100% efficiency.
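As a worked example, the sketch below evaluates the model with illustrative numbers; the formulas are the ones defined above, everything else is assumed.

```python
# A worked example of the cue-factor model defined above; the formulas
# are the article's, the sample numbers are illustrative.

def cue_factor(n, b, d, mu):
    """alpha = N x B x D / mu: coordination overhead of the team."""
    return n * b * d / mu

def total_man_hours(tp, wh, alpha):
    """Th = Tp x Wh x (1 + alpha): effort inflated by the overhead."""
    return tp * wh * (1 + alpha)

# Eight engineers; mu is the sum of per-member independence scores mu_i.
mu = sum([5.0] * 8)                 # highly independent work areas
alpha = cue_factor(n=8, b=2, d=1.0, mu=mu)
print("alpha =", alpha)             # 0.4
print("Th    =", total_man_hours(tp=8, wh=160, alpha=alpha))  # 1792.0 vs 1280 ideal
```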

Concurrent Engineering and µ
The task at hand for any project manager, then, is to bring µ to as high a value as possible. There are two levers, each having a profound impact on the value of µ, independently and jointly:

- The total number of engineers that are working on the project
- The dependence that each engineer’s work has on the rest of the project

Effectively, if the project tasks are broken down into basic building blocks, such that each engineer works in his own area with little or no impact on the work of the rest of the team, µ approaches ∞ and α approaches 0. This also means the size of the team can, in theory, be increased without limit. Of course, budgetary and other constraints affect the size of the team that can be deployed on a project at any given time.

But the question is: how do you divide a project into granular segments, and then rejoin them, while ensuring that no one engineer has any impact on the work of any other member of the team? The answer lies in concurrent engineering.

You break the entire project up into its basic bricks, and then put together separate work areas for every engineer. Each resource has his or her own access area into the project environment, where they enter, do their engineering and leave, without even considering what the others are doing.

The catch, then, would be reassembling the entire broken-up project! Not so, if the breakup is a purely virtual one: in effect, the entire project resides in a central location, and only access areas demarcate the engineers. With the application residing on a central server, and the engineers accessing it through thin clients while working within their own secure engineering areas, the project is never actually broken into pieces at all!

Concurrent Remote Engineering in a Flat World
In a flat world, businesses bring the most efficient resources together at a central virtual work location and carry out their business, whether it be outsourcing tax return preparation to Bangalore or selling credit cards from Hanoi. The same applies to automation project engineering. Imagine having the best engineers working together on the same server, sitting anywhere in the world.

With concurrent remote engineering, project managers the world over have experienced a 25% decrease in their overall effective engineering time; that is, α driven effectively to zero.

This is what the project managers and engineers working on the OGDCL Uch gas field experienced first-hand. Upgrading a legacy automation software system to SENSYS’ IntelliMAX (http://www.sensys.com/), they made full use of an object-oriented concurrent remote engineering environment.

Multiple engineers, based in different locations, worked on the same project and were able to engineer and commission a complete plant in less than two weeks: almost half the time that the initial project management team had envisioned.
The engineering team comprised eight engineers working concurrently on the project from Houston, Uch and Lahore.

The initial project specifications did not include configuring the advanced historian DataMAX in the application; however, when the project manager, H. Tariq, observed that the team was ahead of plan and under budget, he decided to include this facet in the delivered application.

With the job completed 30% under budget, and with features beyond those originally agreed with the customer, the impact of concurrent remote engineering on the trade-off prism and the cue factor was felt directly in time and money saved.