A utility function is the thing an agent wants to maximise
up:: AI Alignment MOC

In reinforcement learning, agents receive rewards based on their behaviour and seek to maximise reward TK. The utility function (or the agent's terminal goals) is the thing the agent ultimately seeks to maximise.
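In standard RL notation (a sketch, not taken from this note), this amounts to choosing a policy $\pi$ that maximises expected discounted reward:

$$\pi^* = \arg\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} r_t\right]$$

where $r_t$ is the reward at step $t$ and $\gamma \in [0, 1)$ is a discount factor.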
For example, when training an agent to play a video game, the agent might receive a reward every time its score goes up, and therefore seeks to increase its score.
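As a rough sketch (the names here are illustrative, not from any particular library), a score-based reward could simply be the change in score at each step, so maximising total reward means maximising the final score:

```python
def score_reward(previous_score: int, current_score: int) -> int:
    """Reward is the increase in score since the last step."""
    return current_score - previous_score

# Example: the score goes from 100 to 150, so the agent receives +50.
print(score_reward(100, 150))  # 50
```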
It is hard to define utility functions that are actually in line with our goals: Specification Problems TK.