
Cost of Energy Optimised by Reinforcement Learning (CEORL)


Stage

Stage 2

Project Lead

MaxSim Ltd

Project Sub-Contractors

Caelulum Ltd, Aquaharmonics Inc., Wave Conundrums Consulting, University of Edinburgh, Marine Systems Modelling, REOptimize Systems

The CEORL project uses reinforcement learning (RL) to learn good control policies for several classes of wave energy converters (WECs). Policies have been learnt in simulations of a WEC and then transferred to a real WEC, where further learning can occur. Control policies are specific to the class of WEC, but the RL algorithms that learn them are not.
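
This workflow can be pictured as a two-stage, model-free learning loop: a policy is trained against a simulator and then continues to be updated on the device using only observed rewards. The sketch below is a minimal illustration under stated assumptions: the toy heaving-buoy surrogate, the hill-climbing policy search (a deliberately simple stand-in for the project's RL algorithms) and all parameter values are hypothetical, not the CEORL implementation.

```python
# Illustrative sketch only: ToyWEC, the reward and the hyper-parameters are
# hypothetical stand-ins, not the CEORL simulator or algorithm.
import numpy as np

class ToyWEC:
    """Toy heaving-buoy surrogate standing in for a WEC simulator (or the real device)."""
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def episode_power(self, damping):
        """Mean power absorbed over one short sea state under a linear damping policy."""
        dt, steps = 0.1, 600
        z, v, power = 0.0, 0.0, 0.0
        for t in range(steps):
            wave_force = np.sin(0.8 * t * dt) + 0.3 * self.rng.standard_normal()
            pto_force = -damping * v            # control action: PTO damping force
            a = wave_force + pto_force - 0.5 * z - 0.1 * v
            v += a * dt
            z += v * dt
            power += damping * v * v * dt       # power absorbed by the PTO
        return power / (steps * dt)

def learn(env, damping=1.0, episodes=200, step=0.1):
    """Model-free policy search: perturb the policy, keep changes that raise the reward."""
    best = env.episode_power(damping)
    for _ in range(episodes):
        trial = max(0.0, damping + np.random.normal(0.0, step))
        reward = env.episode_power(trial)
        if reward > best:
            damping, best = trial, reward
    return damping

# Learn in simulation, then continue learning from the transferred policy on "hardware".
sim_policy = learn(ToyWEC(seed=0))
hw_policy = learn(ToyWEC(seed=1), damping=sim_policy, episodes=50)
print(f"policy after simulation: {sim_policy:.2f}, after on-device fine-tuning: {hw_policy:.2f}")
```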

The CEORL project has the potential to overcome the following challenges for WEC control:

  • Absence of adequate models: model-predictive control is only as good as its model, and that model needs bespoke development for each device type. Model-free RL does not require a model.
  • Need for wave-by-wave control: sophisticated wave-by-wave control could improve the economic viability of WECs by balancing the competing requirements of high and low loads. Large capture widths require large forces, but large forces increase operational costs through both peak and fatigue loads. Striking the right balance between these requirements is likely to yield the lowest LCOE, and RL is rewarded for learning control policies that trade off these requirements to minimise LCOE (a toy reward of this kind is sketched after this list).
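
As an illustration of that trade-off, the hypothetical reward below combines absorbed power with penalties on peak and fatigue-like loads. The function name, the weights and the crude damage-equivalent-load proxy are assumptions made for this sketch, not the reward used in CEORL.

```python
# Hypothetical LCOE-proxy reward: weights and fatigue proxy are illustrative only.
import numpy as np

def lcoe_proxy_reward(power, pto_force, w_peak=0.1, w_fatigue=0.2, m=3.0):
    """Reward for one control interval: capture term minus load-cost terms.

    power     -- mean absorbed power over the interval (kW)
    pto_force -- array of PTO force samples over the interval (kN)
    w_peak    -- weight on the peak load (extreme-load / structural-cost proxy)
    w_fatigue -- weight on a damage-equivalent load (fatigue / O&M-cost proxy)
    m         -- Wohler-type exponent used in the crude fatigue proxy
    """
    peak_load = np.max(np.abs(pto_force))
    fatigue_load = np.mean(np.abs(pto_force) ** m) ** (1.0 / m)
    return power - w_peak * peak_load - w_fatigue * fatigue_load

# An aggressive policy may capture more power yet score worse once its
# larger peak and fatigue loads are charged against it.
forces_aggressive = 400.0 * np.sin(np.linspace(0.0, 20.0, 200))
forces_gentle = 150.0 * np.sin(np.linspace(0.0, 20.0, 200))
print(lcoe_proxy_reward(120.0, forces_aggressive))  # lower reward despite higher capture
print(lcoe_proxy_reward(90.0, forces_gentle))       # higher reward with smaller loads
```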

Control Systems Stage 2 - Public Report - MaxSim Ltd

Control Systems Stage 2 Public Report for the MaxSim "Cost of Energy Optimised by Reinforcement Learning" project. Includes a description of the technology, scope of work, achievements and recommendations for further work.
