
Cost of Energy Optimised by Reinforcement Learning (CEORL)


Stage: Stage 3

Project Lead: MaxSim Ltd

Project Sub-Contractors: Aquaharmonics Inc., Wave Conundrums Consulting, University of Edinburgh, Quoceant Ltd, Pelagic Innovation Ltd, Marine Systems Modelling

This project is currently ongoing, and final public reporting is due to be published in Winter 2022.

For further information on the current status, please contact the Lead Applicant.


The objectives of the Stage 3 project are to provide evidence of a potential step change in the levelised cost of energy (LCOE) and to demonstrate this by testing an RL-derived control policy on a physical model in a wave tank, with power taken off by an electrical generator. This will provide evidence of policy transferability: that a policy learnt on a numerical model can be transferred to a physical system, and that training in one set of conditions leads to a policy that remains effective in different conditions.
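To illustrate what policy transferability means in practice, the minimal sketch below trains a reinforcement-learning policy on a toy simulated wave energy converter in one sea state and evaluates the same policy in a different sea state. The WecEnv model, its dynamics, and all parameters are illustrative placeholders rather than the project's actual models or toolchain; the open-source Stable-Baselines3 PPO implementation stands in for whatever learning algorithm the project uses.

```python
# Sketch of policy transferability on a hypothetical simulated WEC.
# The environment is a crude mass-spring-damper surrogate, not the project model.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class WecEnv(gym.Env):
    """Toy heaving-buoy WEC: the action sets a power take-off (PTO) damping level."""

    def __init__(self, wave_height=1.0, wave_period=8.0, dt=0.1, horizon=600):
        super().__init__()
        self.h, self.T, self.dt, self.horizon = wave_height, wave_period, dt, horizon
        # Observation: buoy position, velocity, and instantaneous wave elevation.
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(3,), dtype=np.float32)
        # Action: normalised PTO damping coefficient in [0, 1].
        self.action_space = spaces.Box(0.0, 1.0, shape=(1,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.x, self.v = 0.0, 0.0, 0.0
        return self._obs(), {}

    def step(self, action):
        damping = 1e4 * float(action[0])                     # PTO damping [N s/m]
        eta = self.h * np.sin(2 * np.pi * self.t / self.T)   # wave elevation [m]
        # Placeholder heave dynamics: hydrostatic stiffness minus PTO damping force.
        force = 5e4 * (eta - self.x) - damping * self.v
        self.v += self.dt * force / 1e4
        self.x += self.dt * self.v
        self.t += self.dt
        reward = damping * self.v ** 2 * self.dt             # energy absorbed this step [J]
        truncated = self.t >= self.horizon * self.dt
        return self._obs(), reward, False, truncated, {}

    def _obs(self):
        eta = self.h * np.sin(2 * np.pi * self.t / self.T)
        return np.array([self.x, self.v, eta], dtype=np.float32)


def evaluate(model, env, episodes=5):
    """Average energy absorbed per episode under the learnt policy."""
    totals = []
    for _ in range(episodes):
        obs, _ = env.reset()
        done, total = False, 0.0
        while not done:
            action, _ = model.predict(obs, deterministic=True)
            obs, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
        totals.append(total)
    return float(np.mean(totals))


if __name__ == "__main__":
    train_env = WecEnv(wave_height=1.0, wave_period=8.0)    # training sea state
    test_env = WecEnv(wave_height=1.5, wave_period=10.0)    # unseen sea state
    model = PPO("MlpPolicy", train_env, verbose=0)
    model.learn(total_timesteps=20_000)
    print("Energy in training conditions:", evaluate(model, train_env))
    print("Energy in unseen conditions:  ", evaluate(model, test_env))
```

Comparing the two printed figures is one simple way to gauge how well a policy trained in one set of conditions carries over to another; the project's tank testing asks the stronger question of whether a policy trained on a numerical model carries over to physical hardware.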

We will investigate the limitations of this approach, including limits to policy transferability, difficulties with particular sensor types, and practical implementation problems, in order to support planning of future R&D effort.

The project will develop intellectual property on the process of deriving a policy for a particular wave energy converter (WEC). This will involve assessing whether hardware-in-the-loop (HIL) testing is a useful step in that process.

The goal of this project is to show that RL-derived policies can either double the energy captured or halve the loads compared with the baseline control. We are confident that this is achievable, and indeed hope to do better than double. However, the main project ambitions are to de-risk the R&D process by gaining a better understanding of the opportunities and limitations, and to build confidence and interest in our results within the wave energy community.