Ioannis is a Ph.D. Candidate in the Department of Mechanical Engineering at the University of Sheffield. Prior to joining Sheffield in 2015, he worked as a Research Assistant in the Centre for Automotive Engineering at the University of Surrey, where his work, in collaboration with leading automotive manufacturers, led to the development of novel control systems for improving vehicle handling and ride comfort. Ioannis holds an MEng degree in Mechanical Engineering from the University of Bath, where he graduated in 2012, specializing in the modelling, simulation and control of mechatronic systems.
Research interests
- Robotics
- Machine learning
- Acoustics
- Vibration and signal processing
Current research
I am currently working on developing models and control methodologies for nonlinear systems using machine learning. An example of a nonlinear system that I am trying to model is a gas turbine engine, and more specifically its combustion process. Learning a nonlinear mapping from an engine’s experimental observations, e.g. combustion chamber pressure and temperature, is an important step towards analysing and optimising its performance. Arriving at an accurate and reliable model is particularly challenging because the algorithm must be “trained” such that it can “learn” the nonlinear characteristics of the engine from a small, finite set of noisy observations. During the first stage of my PhD, I successfully demonstrated that machine learning can be used to predict an engine’s health state by monitoring its structural vibration response at various operating conditions.
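As a rough illustration of this kind of problem, the sketch below fits a Gaussian process (the nonparametric Bayesian method used later in this project) to a small set of noisy observations of a nonlinear function. The data are synthetic stand-ins rather than real engine measurements, and scikit-learn is assumed purely for convenience.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Small, noisy training set: 20 observations of an unknown nonlinear function
# (a stand-in for quantities such as chamber pressure vs. operating condition).
X_train = rng.uniform(0.0, 5.0, size=(20, 1))
y_train = np.sin(X_train).ravel() + 0.1 * rng.standard_normal(20)

# An RBF kernel captures smooth nonlinearity; WhiteKernel models observation noise.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# Predict at unseen operating points, together with an uncertainty estimate.
X_test = np.linspace(0.0, 5.0, 100).reshape(-1, 1)
y_mean, y_std = gp.predict(X_test, return_std=True)
```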
Reinforcement learning is a subfield of machine learning that has recently shown great potential in robotics, e.g. in tracking trajectories in uncertain environments. In this project, the focus is on demonstrating improved control performance, for instance in stabilizing nonlinear systems that are subject to high disturbance levels in their operating environment. In general, this is possible because an optimal control policy is obtained from the observations that the system gathers, rather than from an approximate theoretical model. Although robust control methodologies have been proposed to account for model uncertainty in controller design, the system and its environment may still change over time, necessitating continuous updates. In safety-critical applications, such as a gas turbine engine, these effects must be incorporated in the control policy to maintain the initial guarantees of stability and performance. This is where reinforcement learning has an advantage over classical control: an agent learns a control policy by interacting with its environment, a sequential decision-making process that is modelled as a Markov Decision Process (MDP). However, these methods also face both theoretical and practical challenges, among them guaranteeing stability during online learning and policy evaluation. These issues must be addressed if we are to realize the full potential of reinforcement learning in robotics.
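To make the agent–environment loop and the MDP formulation concrete, here is a minimal, self-contained sketch of tabular Q-learning on a toy five-state chain. The environment, rewards and hyperparameters are illustrative inventions, not the project’s actual plant.

```python
import numpy as np

n_states, n_actions = 5, 2            # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))   # action-value table
alpha, gamma, eps = 0.1, 0.95, 0.3    # learning rate, discount, exploration rate
rng = np.random.default_rng(1)

def step(s, a):
    """Toy dynamics: move along the chain; reward 1 on reaching the last state."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward, s_next == n_states - 1

for episode in range(300):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s_next, r, done = step(s, a)
        # Temporal-difference update towards the Bellman target.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

policy = Q.argmax(axis=1)             # greedy policy extracted after learning
```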
This project applies the Bayesian learning framework to reinforcement learning, and more specifically a well-known nonparametric Bayesian method: Gaussian Processes. In this framework, the controller parameters are inferred from the noisy observations that the robot gathers. An attractive property of this approach is that the uncertainty associated with each prediction is also available. The plant in Figure 1 represents an unmanned aerial vehicle, which is controlled by independently varying the speed of each of its four motors. Using a high-speed motion capture system, we can calculate its pose with high accuracy, as required by the controller.
Figure 1: General framework for learning control policies.
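One hedged illustration of how that predictive uncertainty can be exploited (an assumption for illustration, not necessarily this project’s exact method): model the mapping from a controller gain to an observed closed-loop cost with a Gaussian process, then pick the next gain to trial with a lower-confidence-bound rule, in the spirit of Bayesian optimisation. The cost function and gain range below are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

def observed_cost(gain):
    """Hypothetical noisy closed-loop cost of running the plant with this gain."""
    return (gain - 2.0) ** 2 + 0.05 * rng.standard_normal()

gains = list(rng.uniform(0.0, 4.0, size=3))       # a few initial random trials
costs = [observed_cost(g) for g in gains]

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
candidates = np.linspace(0.0, 4.0, 200).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(np.asarray(gains).reshape(-1, 1), np.asarray(costs))
    mean, std = gp.predict(candidates, return_std=True)
    # Lower confidence bound: prefer low predicted cost, but remain
    # optimistic where the model is uncertain.
    next_gain = float(candidates[np.argmin(mean - std), 0])
    gains.append(next_gain)
    costs.append(observed_cost(next_gain))

best_gain = gains[int(np.argmin(costs))]          # best gain found so far
```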