Any conversation about tracker resilience, especially with regard to wind and hail, is also a conversation about optimizing electricity generation. In short, the less time solar modules spend in their stow position, the more time they spend out in the field generating electricity optimally.
Maximizing generation hours over a project's lifetime is paramount to delivering on power purchase agreements (PPAs) and keeping project revenue maximized. In practice, achieving the lowest possible stow time comes down to three major components: hardware reliability, weather monitoring systems, and tracker controls software.
Surviving the storm
While modules take the brunt of collisions with hail or any other striking object, mitigating damage to a system does not start with the module, but with the tracker.
For Nextracker, this mitigation comes in the form of NX Navigator, a software and smart control system that includes a hail function. This function moves the entire solar array to a safer, 60-degree stow angle.
As might be expected, the issue for modules has always been repeated, large impacts. Many module faces can withstand a single strike from an ice ball upwards of 40 millimeters (mm) in diameter, or around the size of a golf ball, according to Whitfield, as well as up to 11 strikes of 25 mm hail. Stowing modules not only reduces the risk of direct face strikes, but also keeps the system from being further compromised by the high winds that accompany hailstorms.
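The benefit of a steep stow angle can be illustrated with a back-of-envelope calculation (this is an illustration, not Nextracker's engineering model): for hail falling vertically, the velocity component normal to the module face scales with the cosine of the tilt angle, so the normal-impact kinetic energy scales with its square. This simplification ignores wind-driven, non-vertical hail.

```python
import math

def normal_energy_fraction(tilt_deg: float) -> float:
    """Fraction of a vertically falling hailstone's kinetic energy
    delivered normal to a module tilted tilt_deg from horizontal.
    Scales as cos^2(tilt); assumes no horizontal wind component."""
    return math.cos(math.radians(tilt_deg)) ** 2

print(normal_energy_fraction(0))   # flat module: full normal impact energy
print(normal_energy_fraction(60))  # 60-degree hail stow: ~0.25, a ~75% reduction
```

Under this simple geometry, the 60-degree stow angle cuts the normal component of impact energy to roughly a quarter of what a flat module would absorb.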
As was referenced in the second entry in this series, Arctech Solar has released a whitepaper outlining resiliency measures that can be taken to minimize stow time in projects.
For Arctech, this resiliency takes the form of stowing modules at a zero-degree angle in high-wind scenarios, a solution made possible by the advent of rigid trackers.
Wind stow thresholds in the age of large-format modules (LFMs) are typically around 12 m/s gusts, a relatively common speed, meaning LFM projects stow more often, generate less energy, and see a higher levelized cost of electricity. For rigid trackers, the critical wind speed sits at 22 m/s, nearly doubling the wind conditions a project can operate under.
In testing, Arctech found that its rigid tracker, stowed at zero degrees, spent just 15 hours, 0.17% of the modeled year, in stow position, losing 204 MWh in that time, roughly 0.09% of projected generation, for a financial loss of just over $5,700. A traditional tracker with a wind threshold of 16 m/s and a 30-degree stow angle spent 243 hours, 2.77% of the modeled year, in stow position, losing 1,711 MWh in that time, roughly 0.77% of projected generation, for a financial loss of almost $48,000. A hypothetical worst-case scenario with a wind threshold of 12 m/s and a 45-degree stow angle spent 1,288 hours, 14.7% of the modeled year, in stow position, losing nearly 9,300 MWh in that time, roughly 4.17% of projected generation, for a financial loss of over $260,000.
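The arithmetic behind these scenarios is straightforward to reproduce. The sketch below uses the stow hours and energy losses cited above; the roughly $28/MWh energy price is inferred from the whitepaper's dollar figures, not stated in it, so treat it as an assumption.

```python
# Reproducing the stow-loss arithmetic from Arctech's modeled comparison.
# Stow hours and MWh losses come from the figures cited in the text;
# the ~$28/MWh price is back-calculated from those figures (assumption).

HOURS_PER_YEAR = 8760

def stow_loss(stow_hours: float, energy_lost_mwh: float,
              price_per_mwh: float = 28.0) -> tuple[float, float]:
    """Return (percent of year spent in stow, financial loss in dollars)."""
    pct_of_year = stow_hours / HOURS_PER_YEAR * 100
    return pct_of_year, energy_lost_mwh * price_per_mwh

scenarios = {
    "rigid tracker, 0 deg stow, 22 m/s threshold": (15, 204),
    "traditional, 30 deg stow, 16 m/s threshold": (243, 1711),
    "worst case, 45 deg stow, 12 m/s threshold": (1288, 9300),
}

for name, (hours, mwh) in scenarios.items():
    pct, loss = stow_loss(hours, mwh)
    print(f"{name}: {pct:.2f}% of year stowed, ~${loss:,.0f} lost")
```

Running the numbers recovers the percentages quoted above (0.17%, 2.77%, and 14.7% of an 8,760-hour year) and dollar losses in line with the whitepaper's.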
Eye in the sky
Since hail is a known stow-causer, it is important to know not only where these storms happen but how often. The hail modeling sector has grown rapidly in the last three years, and pv magazine has had conversations with Peter Bostock and John Sedgwick of VDE Americas, who have been working on more accurate models to predict the return interval of stow-necessitating hail.
For most of the solar industry's history, hail threat prediction models were both general and minimal in nature, with developers relying on basic heat maps to locate which areas had severe hail potential. Applying this approach to an asset spread over a large project footprint that has to operate at peak potential for 30 years is far too general. Modeling needed to be done on a granular, site-specific basis.
“Even at a given location, there’s a size distribution,” said Bostock. “And if you map larger locations, there’s a broader spread to that location.”
To fill this need, VDE developed a tool that blends data from human storm spotters as well as from doppler radar.
Spotter data is useful because it provides tangible context on the size of the hail as well as its concentration and distribution. Doppler radar is also used, as it scans and tracks weather data constantly, typically at an 11-degree angle towards the sky. This means the scanning elevation rises as distance from the radar center increases, which is better for tracking hail, which forms high in thunderstorms.
By overlaying spotter data with radar feedback data and finding correlations between the two, Bostock said that the radar data can be corrected for the likely size and concentration of the hail in real time. VDE used this approach to develop a tool that can predict the return interval, the likelihood of a certain type of hail occurring, and the expected size of hail across different locations.
Sedgwick and Bostock said that by applying appropriate mitigation elements, operational elements, and equipment installed with appropriate stow management, the effects of hail can be significantly mitigated.
Everyday operations
What about the majority of days in the year, the ones with no extreme wind or hail, when trackers spend the entire day running through their predetermined functions without interruption? How do you squeeze every watt out of a sunny day?
According to Dean Vukovic, general manager of Terrasmart's ground mount division, achieving a project's optimal yield is about the three Ps: predict, protect and produce.
“On the predict front, we’re using a machine learning-based approach that’s taking a rolling history and modeling what our expected performance should look like, at least from the mechanical and the structural side,” explained Vukovic. “So that we can go harvest information, deliver that to the OEM teams, and predict with confidence what we believe that system is going to do. Within that, we’ve got some pretty advanced weather monitoring algorithms, where we plug into various APIs and make sure that we’re doing the best job possible to predict a future weather event that could cause some sort of harm or disruption to the system. We’ve put a lot of time, effort and energy into that space, because that’s going to be one of the biggest things that can cause some variability in these systems’ performance.”
According to Vukovic, getting these predictive models as accessible and encompassing as possible is paramount for Terrasmart, as they form the basis for the company’s protection and production software solutions.
“From a protection point of view, we’re using some of those predictive algorithms to really form the basis of protection,” he began. “But, for us, we’ve also invested a fair bit of time into how we deliver information in the best possible way, so that teams or asset owners can really take the appropriate action. Our platform, our graphical user interface, how we deliver information at a portfolio level, that whole engagement process between harvesting the data at the row level and delivering it up to people and teams that can take action; this is front and center for us, and we feel like we’re pretty innovative in that space.”
Terrasmart is now on the fourth year of developing its predict, protect, and produce software, dubbed Peak Yield, with additional improvements to the solution based on user feedback.
“On the technical side of protect, we have and continue to bake a lot of smarts into the robots, the network controllers, and weather stations themselves, to have them be as self-sufficient as possible,” said Vukovic. “Whether it’s power or battery management, network connectivity or comms-related issues, or some mechanical variation, whatever the case may be, we really distill that information in a clean, crisp fashion that allows people to take action.”
And while relaying data on its own doesn’t seem like it would minimize downtime in project generation, Vukovic shares that the more educated and in tune EPCs and asset operators are with their projects, the less downtime there will be.
“If we can get that information delivered in a succinct way, then it means that these teams aren’t tearing through slabs of data to understand if there’s an issue; the system and software are doing that already for them,” said Vukovic.