Abstract
In the existing approach to maintenance and repair decision making for infrastructure facilities, policy evaluation and policy selection are performed under the assumption that a perfect facility deterioration model is available. The writer formulates the problem of developing maintenance and repair policies as a reinforcement learning problem in order to address this limitation. The writer explains the agency-facility interaction considered in reinforcement learning and discusses the probing-optimizing dichotomy that arises in performing policy evaluation and policy selection. Temporal-difference learning methods are then described as an approach that can be used to address maintenance and repair decision making. Finally, the results of a simulation study are presented, showing that the proposed approach can be used for decision making in situations where complete and correct deterioration models are not (yet) available.
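As a concrete illustration of the temporal-difference approach the abstract refers to, the sketch below applies tabular Q-learning to a toy facility-maintenance problem. The condition states, actions, costs, and deterioration probabilities here are invented for illustration and are not taken from the paper; the point is only that an agent can learn a maintenance policy from observed transitions, without access to the underlying deterioration model.

```python
import random

# Illustrative facility model (NOT the paper's): 5 condition states,
# 0 = best, 4 = worst. The agent never sees these transition
# probabilities; it only observes sampled transitions and costs.
N_STATES = 5
ACTIONS = ["do-nothing", "repair"]

def step(state, action):
    """Simulate one inspection period; return (next_state, cost)."""
    if action == "repair":
        cost = 5.0 + state          # assumed repair cost grows with damage
        next_state = 0              # repair restores the facility
    else:
        cost = 0.5 * state          # assumed user cost of poor condition
        # assumed deterioration: worsen by one state with probability 0.4
        next_state = min(state + 1, N_STATES - 1) if random.random() < 0.4 else state
    return next_state, cost

# Tabular Q-learning (a temporal-difference method): learn action costs
# from experience and derive a maintenance policy from them.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.95, 0.1   # step size, discount, exploration rate

state = 0
for t in range(50_000):
    # epsilon-greedy: mostly exploit the current policy, occasionally probe
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = min(ACTIONS, key=lambda a: Q[(state, a)])
    next_state, cost = step(state, action)
    # TD update toward observed cost plus discounted cost-to-go of the
    # best action in the next state
    best_next = min(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (cost + gamma * best_next - Q[(state, action)])
    state = next_state

for s in range(N_STATES):
    policy = min(ACTIONS, key=lambda a: Q[(s, a)])
    qvals = {a: round(Q[(s, a)], 2) for a in ACTIONS}
    print(f"state {s}: choose {policy}  (Q-values: {qvals})")
```

The epsilon-greedy action selection in this sketch is one simple way to handle the probing-optimizing dichotomy mentioned in the abstract: most of the time the agent follows its current best estimate of the policy, but occasionally it probes other actions to keep improving its cost estimates.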
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1-8 |
| Number of pages | 8 |
| Journal | Journal of Infrastructure Systems |
| Volume | 10 |
| Issue number | 1 |
| DOIs | |
| State | Published - Mar 2004 |
Keywords
- Decision making
- Infrastructure
- Maintenance
- Rehabilitation
- Stochastic models
ASJC Scopus subject areas
- Civil and Structural Engineering