Data creep is an ugly name for a common practice: adding special models to community databases. It is no less prevalent in databases of interconnected grids. In the Eastern Interconnection, a planning database will contain on the order of 70,000 buses and 30,000 dynamic models, representing everything from Florida to the Texas panhandle and from Idaho to New Brunswick, at voltage levels from 34.5 to 765 kV. Whether we need all the data for any specific analysis is moot; frequently, studies carry the full dataset even when the focus is on localized phenomena. Equivalents, or reduced models, were necessary in the past when computers had limited memory and low speed. Nowadays, reduced models may still be justified for the sake of simplicity and convenience, as long as accuracy is not compromised.
Part of the dilemma is due to the highly meshed nature of the grid: using equivalents raises the concern of reduced accuracy. Another part comes from widely varying modeling practices – the level of detail, the bases for data such as ratings, and the standards for analysis. But perhaps most concerning of all is the recent popularity of adding closed special models for equipment such as HVDC converters, wind turbines, exotic storage devices, and composite loads, among others. These models come as executables or DLLs that are closed to everyone but the providers of the models. Users must basically trust the providers that the models faithfully represent the equipment response for the study purpose, and that the models will not introduce intractable numerical errors in simulations. The alternative is to use a generic or standard model in place of the special model, at the risk of misrepresenting the equipment's characteristic and calling the accuracy of the analysis into question.
The basic arguments for the practice are clear enough. Those favoring closed models point out the need to keep model information strictly confidential because manufacturers would like to secure proprietary data for special equipment, avoid misrepresentation of the performance, and retain rights to key parameters and functions. Those who favor open models stress the need to be able to debug the special models when they produce unexpected results, and to understand the basis for the modeling – critical to designing system solutions that can actually work.
There are obvious misconceptions, and malpractices derived from them. Among these:
- That the models' performance can be compared with that of other models in field applications – Since each model is proprietary and tested under specific conditions defined by its developers, there is no direct comparison of response characteristics against a common set of tests.
- That the models are the most accurate representation for all applications – Every model has a limited range of application. There is such a wide variety in planning and operations analyses that it is impractical, if not impossible, to model response for all conditions and disturbances. Developers will aim for the expected applications and wait for user feedback.
- That a specific model is better than a generic one – If an analyst is running a transient stability simulation with quarter-cycle time steps, how significant is the twelfth-cycle response of a set of controlled thyristors in, say, an HVDC model?
- That a model supplied by a software vendor is foolproof – Not to open a broader debate on the subject, but good software comes from users helping debug it over man-days, months, and years of application.
- That a model will carry forward or be backward compatible for other versions of the software – Guess again.
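The time-step argument in the list above can be made concrete with simple arithmetic. A sketch (the 60 Hz base and step sizes are illustrative assumptions, not taken from any particular software):

```python
# Illustrative arithmetic for the time-step argument (assumes a 60 Hz system).
F_HZ = 60.0
CYCLE_S = 1.0 / F_HZ               # one cycle is about 16.7 ms

dt = CYCLE_S / 4                   # quarter-cycle simulation step, ~4.17 ms
device_response = CYCLE_S / 12     # twelfth-cycle thyristor dynamics, ~1.39 ms

# By Nyquist-style reasoning, a simulation stepped at dt can only resolve
# behavior with time scales of roughly 2*dt or longer; faster dynamics are
# aliased or simply lost between steps.
resolvable = device_response >= 2 * dt

print(f"step = {dt*1000:.2f} ms, device response = {device_response*1000:.2f} ms")
print(f"resolvable at this step size? {resolvable}")
```

The point of the rhetorical question follows directly: detail the solver cannot resolve adds complexity and opacity without adding accuracy.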
Horror stories abound. One is the case of a black-box model of a new wind generator that was actually no more than a Norton equivalent: a current source behind an impedance. This model made the rounds, being embedded in a few studies. How many planning decisions were made with this contrived model?
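A Norton equivalent of that sort is trivial to state, and stating it shows why it cannot stand in for a wind generator: the injected current follows a fixed algebraic law, with no controller behind it. A minimal sketch, with hypothetical per-unit values and phasors as complex numbers:

```python
# Minimal Norton-equivalent "model": a fixed current source behind an impedance.
# All parameter values are hypothetical, in per unit; phasors are complex numbers.
I_SRC = 1.0 + 0.0j         # fixed source current (pu)
Z_TH = 0.05 + 0.25j        # fixed Norton impedance (pu)

def norton_injection(v_terminal: complex) -> complex:
    """Current injected into the grid -- the entire 'model' in one line.
    There are no controller states, no current limits, no ride-through logic."""
    return I_SRC - v_terminal / Z_TH

# During a voltage dip the injection changes only through Ohm's law:
i_normal = norton_injection(1.0 + 0.0j)   # nominal terminal voltage
i_fault = norton_injection(0.5 + 0.0j)    # depressed voltage during a fault
```

A real wind-turbine model would switch control modes, limit converter current, and ride through the fault; none of that behavior exists here, which is why planning studies built on such a stand-in are suspect.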
In the Western Interconnection, the providers of the widely used stability software PSLF seem to have nipped the problem in the bud by insisting on, and standing by, a policy of open modeling. This is not to say that the pressure and the need to allow for closed models aren't there, but asserting the planner's right to use the database as intended deserves a good cheer.
Unfortunately, this solution seems to be beyond reach in many other cases. The practice of closed user models may have become too widespread and, in some instances, may even have the support of software suppliers, who are themselves manufacturers. An alternative is to set standards for the models in terms of tests, comparisons, numerical performance, allowable ranges of input voltages and currents, allowable time step size, allowable duration of simulation, and so on. It would also be important to require that developers clearly specify what “alpha” or “beta” versions really mean.
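The standards suggested above could take the form of a conformance declaration published with each closed model. A hypothetical sketch of what such a declaration might contain (every field name and value here is illustrative, not drawn from any existing standard):

```python
# Hypothetical model-conformance declaration a planning body might require
# vendors to publish alongside each closed model. All fields are illustrative.
model_spec = {
    "model_name": "example_hvdc_converter",    # hypothetical identifier
    "software_version": "v21.0",               # version the model was validated against
    "maturity": "release",                     # vs. "alpha"/"beta", with each term defined
    "valid_voltage_range_pu": (0.2, 1.3),      # input voltages the model accepts
    "valid_current_range_pu": (0.0, 1.5),
    "max_time_step_s": 0.004,                  # largest step for numerical stability
    "max_simulation_duration_s": 60.0,
    "benchmark_tests": [                       # common tests enabling comparison
        "three_phase_fault_ride_through",
        "step_change_in_reference",
        "frequency_excursion",
    ],
}
```

Even without opening the model itself, a declaration like this would let users compare models against a common set of tests and know when a simulation has left the model's validated range.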
A novel way to resolve the closure/disclosure dilemma is to establish a patent for each model and disclose the block diagram and all associated information, including software code. The patent would protect the manufacturer from being copied and at the same time disclose the information the power industry needs for the pertinent analyses. This of course assumes that the information and technology are patentable. The approach may give users broader leeway in applying models in studies. Additionally, a regulatory body could help with the official updating of the models and associated software.
As new technologies enter the landscape of grid models, it is clear we will need a better way of handling them that fits the needs of planners, regulators, analytical providers and equipment suppliers.
For questions, comments and further discussion, contact us at email@example.com
- “Let me just say that nothing has hindered our purchases of models more than software patents. We have spent thousands and thousands and thousands of dollars on legal fees rewriting contracts to avoid the legal liabilities of software patents. If you patent software, we are exposed to patent liability. If you use someone else’s patented software and then sell us your software, then we are exposed to the patent liability. We have no control over that exposure and it can be very large. In contrast, copyright liability is much more limited. We can handle that. Wherever possible it is better to use copyrights rather than patents.” Name withheld upon request.