Adil Khan 9 months ago
AdiKhanOfficial #FYP Ideas


Project Title

Deep Reinforcement Learning Based Data Driven Vector Control Design of Induction Motor Drive

Project Area of Specialization

Artificial Intelligence

Project Summary

With the development of high-speed microprocessors, it is now possible to implement mathematically complex vector control algorithms without compromising the performance of the motor drive. Among motor control techniques, PID control, direct torque control (DTC), field-oriented control (FOC), and model predictive control (MPC) are widely used in industry. However, their limitations have urged researchers to develop more advanced techniques.
A novel approach is to use reinforcement learning (RL) to have an agent learn electric drive control from scratch merely by interacting with a suitable control environment. RL has achieved remarkable, even superhuman, results in many games and is also becoming more popular in control tasks, such as the cart-pole or pendulum swing-up benchmarks.

The open-source Python package gym-electric-motor (GEM) is available to ease the training of RL agents for electric motor control. Furthermore, this package can be used to compare the trained agents with other state-of-the-art control approaches. It is based on the OpenAI Gym framework, which provides a widely used interface for the evaluation of RL agents. The package covers different three-phase AC and DC motor variants, as well as different power electronic converters and mechanical load models. Considering the relevance of this package, it has been utilized in the proposed work.

In the proposed method, the dynamic model of the induction motor is updated adaptively based on discrete-time data from load variations within the built-in environment. An intelligent controller, based on a deep Q-network (DQN) with a linearly annealed policy, that controls a squirrel cage induction motor (SCIM) is presented and will be benchmarked against controllers based on other reinforcement learning algorithms.
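The linearly annealed policy mentioned above can be sketched as a schedule that decays the exploration rate of an epsilon-greedy DQN agent over training steps. The schedule parameters and function names below are illustrative assumptions, not values from the proposal:

```python
import random

def annealed_epsilon(step, eps_start=1.0, eps_end=0.05, anneal_steps=10_000):
    """Linearly anneal the exploration rate from eps_start down to eps_end."""
    frac = min(step / anneal_steps, 1.0)
    return eps_start + frac * (eps_end - eps_start)

def select_action(q_values, step, rng=random):
    """Epsilon-greedy selection with a linearly annealed epsilon:
    explore with probability eps, otherwise pick the greedy action."""
    eps = annealed_epsilon(step)
    if rng.random() < eps:
        return rng.randrange(len(q_values))                      # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit
```

Early in training the agent explores almost every step; by the end of the annealing window it acts greedily 95% of the time, which is the usual trade-off such a schedule encodes.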

In this project, deep reinforcement learning based, data-driven approaches for the vector control of a three-phase induction motor are explored and developed. Specifically, the performance of A3C, A2C, DDQN, and DQN will be compared, and the benchmark results will be validated for the given task. The team also aims to develop a new reinforcement learning algorithm based on the algorithms listed above.

Project Objectives

Electric motor control has been an important topic in research and industry for decades, and many different strategies have been invented, e.g., proportional-integral (PI) controllers, DTC, FOC, model predictive control (MPC), and finite-control-set MPC (FCS-MPC). The latter methods either have limitations or require an accurate model of the system, from which the next control action is calculated through an online optimization over the next time steps. Typical challenges when implementing MPC algorithms in drive systems are the computational burden due to the real-time optimization requirement and plant-model deviations, which lead to inferior control performance during transients and in steady state.
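The online optimization burden mentioned above can be illustrated with a minimal FCS-MPC sketch: for a hypothetical scalar plant x' = a·x + b·u, every input sequence from a finite set is enumerated over a short horizon and the first input of the cheapest sequence is applied (receding horizon). All model parameters here are illustrative assumptions; note the enumeration cost grows as |U|^N per control step, which is exactly the real-time burden the text refers to:

```python
from itertools import product

def fcs_mpc_action(x0, a=0.9, b=0.5, horizon=3, inputs=(-1.0, 0.0, 1.0), r=0.1):
    """Finite-control-set MPC on an assumed scalar model x' = a*x + b*u:
    enumerate every input sequence over the horizon, accumulate a
    quadratic cost x^2 + r*u^2, and return the first input of the
    cheapest sequence (receding-horizon principle)."""
    best_cost, best_u0 = float("inf"), None
    for seq in product(inputs, repeat=horizon):
        x, cost = x0, 0.0
        for u in seq:
            x = a * x + b * u        # simulate one step of the plant model
            cost += x * x + r * u * u
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0
```

With three admissible inputs and a horizon of three, 27 candidate sequences are simulated every sampling instant; a real converter's action set and horizon make this far more expensive, motivating model-free alternatives.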
The problems discussed above can be addressed by incorporating an intelligent, model-free control methodology based on deep reinforcement learning, which can further reduce the parametric dependency and computational burden of the control algorithm.

Given the recent advancements in deep reinforcement learning (DRL) and its effectiveness in motor control, the team aims to develop an intelligent DRL-based algorithm for induction motor control. Furthermore, the results will be benchmarked against different DRL approaches, and the development of a novel control algorithm within the DRL framework and the GEM toolbox will be considered.

Finally, a research article shall be published in a national or international journal.

Project Implementation Method

Controlling the currents in an induction motor to regulate its speed is known as induction motor speed control. Induction motors are popular for variable-frequency applications such as industrial drives and electric cars, despite their common use in fixed-frequency applications. An inverter adjusts the current to the stator windings for variable-frequency operation. In induction motors, the magnetic fields in the stator and rotor are coupled: currents in the stator produce a rotating magnetic field, which induces currents in the rotor and, in turn, a lagging rotor field. Due to the interaction of the magnetic fields, the rotor spins at a lower angular speed than the stator field's rotational speed. The slip, or rotational lag, supplies torque to the motor shaft; as the load on the motor increases, the slip and the motor's torque production increase. Speed control using field-oriented control (FOC) adjusts Isd and Isq for a squirrel-cage induction motor, ensuring that the flux is proportional to Isd and the torque is proportional to Isq. This method extends the speed range while also improving dynamic and steady-state performance.
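The slip relation described above can be made concrete with two small helper functions (hypothetical helpers, using the standard relations n_s = 60·f/p for pole pairs p and s = (n_s − n)/n_s):

```python
def synchronous_speed_rpm(f_supply_hz, pole_pairs):
    """Synchronous speed of the rotating stator field: n_s = 60 * f / p."""
    return 60.0 * f_supply_hz / pole_pairs

def slip(n_sync_rpm, n_rotor_rpm):
    """Per-unit slip: s = (n_s - n) / n_s. Positive slip means the rotor
    lags the stator field, which is what produces torque."""
    return (n_sync_rpm - n_rotor_rpm) / n_sync_rpm
```

For example, a 50 Hz supply and two pole pairs give a 1500 rpm synchronous speed; a rotor turning at 1440 rpm then runs at 4% slip.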

GEM’s environments simulate combinations of the converter, electric motor, and load, as depicted in Fig. 1. This section includes short explanations of all included technical models.

The real-time data from the environment trains the agent on a reference trajectory, and the agent takes the role of the controller included in the GEM package.

As depicted in Fig. 1, the basic RL setting consists of an agent and an environment. The environment can be seen as the problem setting and the agent as a problem solver. At every time step t, the agent performs an action a_t ∈ A on the environment. This action affects the environment's state, which is updated based on the previous state s_t ∈ S and the action a_t to s_{t+1}. Afterward, the agent receives a reward r_{t+1} for taking this action, and the environment shows the agent a new observation o_{t+1}. For example, in the motor control environments, the observations are a concatenation of environment states and references. Based on the new observation, the agent will calculate a new action a_{t+1}. The agent's goal is to find an optimal policy π: S → A. A policy π is a function that maps the set of states S to the set of actions A. An optimal policy maximizes the expected cumulative reward over time. Due to the dynamic character of the environment, the state and the reward at a time step t depend on many actions taken previously, as shown in Fig. 2. Therefore, the reward for taking an action is often delayed over multiple time steps.
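The interaction loop described above can be sketched with a toy environment that mimics the Gym-style reset/step interface. The environment and policy here are purely illustrative stand-ins, not part of GEM:

```python
class ToyEnv:
    """Minimal Gym-style environment: the state is an integer counter that
    the agent should drive to a target value."""

    def __init__(self, target=5):
        self.target, self.state = target, 0

    def reset(self):
        self.state = 0
        return self.state                        # initial observation o_0

    def step(self, action):                      # action in {0: -1, 1: +1}
        self.state += 1 if action == 1 else -1
        reward = -abs(self.target - self.state)  # reward r_{t+1}
        done = self.state == self.target
        return self.state, reward, done          # o_{t+1}, r_{t+1}, done

def run_episode(env, policy, max_steps=100):
    """Roll out one episode: observe, act, receive reward, repeat."""
    obs, total = env.reset(), 0.0
    for _ in range(max_steps):
        obs, reward, done = env.step(policy(obs))
        total += reward
        if done:
            break
    return total

always_up = lambda obs: 1  # a trivial policy that always increments the state
```

The cumulative return gathered by `run_episode` is exactly the quantity an optimal policy π would maximize; GEM environments expose the same reset/step pattern with motor states and references as observations.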

Benefits of the Project

Induction motor drives are now a fundamental part of electrical drive systems, and these motors are used for position, speed, and torque control in a wide range of industrial applications. Motors have become more automated and productive since the development of motor drives. The demand for electrical energy has been steadily increasing over the last few decades, and with the development of power electronic devices, electrical energy can now be easily managed to achieve an energy-efficient system. Induction motor speed control was difficult in the past, but with the arrival of power electronics, numerous control techniques have been developed.

Due to the recent advancements in artificial intelligence (AI), model-driven techniques are being extensively replaced by rule- or data-based approaches, and these approaches (in which intelligence is not explicitly provided to the system but acquired over time) have proved to be more effective. The basic idea behind reinforcement learning is to create a so-called agent that learns by itself to solve a specified task in a given environment. In motor control, the effectiveness of DRL is exceptional and is the focus of attention for researchers. As described above, the open-source gym-electric-motor (GEM) package has been utilized in the proposed work for training RL agents and benchmarking them against other state-of-the-art control approaches.
Further, different open-source RL toolboxes, such as Keras-RL, Tensorforce, or OpenAI Baselines, are built upon the OpenAI Gym interface, which adds to its prevalence. Furthermore, recent RL algorithms, as well as imitation learning approaches, can be applied to electric motor control with GEM. For easy and fast development, RL agents can be designed with those toolboxes and afterward trained and tested with GEM before applying them to real-world motor control.

Technical Details of Final Deliverable

The ideas of deep learning have been present for a while now, and their most well-known representative, supervised learning, has continued to succeed. Supervised learning allows a function approximator to be adapted in such a way that an input vector is mapped to a desired output vector (target). The methods of (deep) RL instead allow the network to learn how to make optimal decisions given different environmental states. Here, the quality of the network output is rated by a reward based on its impact on the environment. In this project, the first task is to control the squirrel cage induction motor (SCIM) using the agent implementation already built into Keras-RL2. A GEM environment is used in which the agent controls the converter, which converts the supply currents into the currents flowing into the motor, Isq and Isd. For the SCIM, the discrete B6 bridge converter with six switches is utilised by default; this converter provides a total of eight possible actions. To do this, we have to analyse the whole code, how it works, and the parameters it uses. The DQN agent is trained with a discrete action space: the agent decides which distinct action to perform from a finite action set. After this, we use different algorithms to train on this motor environment and analyse the speed and current of the SCIM. Our expected outcome is to run this SCIM environment with different algorithms, such as DDPG, A2C, and PPO, and analyse the results.
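The eight possible actions mentioned above follow from the 2³ on/off combinations of the B6 bridge's three half-bridges (an idealized two-level converter assumption). A short enumeration makes this concrete:

```python
from itertools import product

def b6_switching_states():
    """Enumerate the 2**3 = 8 switching states of a two-level B6 bridge:
    each of the three half-bridges connects its phase either to the
    negative (0) or positive (1) DC rail."""
    return list(product((0, 1), repeat=3))

states = b6_switching_states()
# states[0] == (0, 0, 0) and states[-1] == (1, 1, 1) are the two
# zero-voltage vectors; the remaining six are the active vectors.
```

This finite action set is what makes a discrete-action agent such as DQN a natural fit for the default SCIM converter, while continuous-action methods like DDPG would instead act on a continuously modulated converter.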

For each algorithm used to train the motor controller, we will obtain different results, from which we will evaluate the efficiency, speed, and currents. Finally, we will compare the results of the algorithms to determine which one is better for controlling the drive of an induction motor, and how it can be improved by adding new techniques and modifying the one that works efficiently and provides benchmark results.

Final Deliverable of the Project

Software System

Core Industry

Energy

Other Industries

Core Technology

Artificial Intelligence (AI)

Other Technologies

Sustainable Development Goals

Industry, Innovation and Infrastructure

Required Resources

Item Name                  | Type          | No. of Units | Per Unit Cost (in Rs) | Total (in Rs)
Google Colab               | Equipment     | 10           | 1973                  | 19730
GPU Nvidia rtx 1650        | Equipment     | 1            | 45000                 | 45000
Final Report Printing      | Miscellaneous | 6            | 1150                  | 6900
Mid Year Evaluation Report | Equipment     | 4            | 400                   | 1600
Total (in Rs)              |               |              |                       | 73230
If you need this project, please contact me on contact@adikhanofficial.com