In this project, you will develop software for the recruitment process in the human resources domain. Recruitment here means matching candidates' CVs against companies' job advertisements. A mechanism for computing effective matching scores will be designed using large language models.
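As a rough illustration of such a matching mechanism, the sketch below scores a CV against an advertisement with cosine similarity. The bag-of-words `embed` function is only a stand-in for the LLM encoder the project would actually design; all names are illustrative.

```python
from collections import Counter
import math

def embed(text):
    # Stand-in embedding: bag-of-words counts. In the project, this
    # would be replaced by a large-language-model encoder.
    return Counter(text.lower().split())

def matching_score(cv, advert):
    """Cosine similarity between a CV and a job advertisement, in [0, 1]."""
    a, b = embed(cv), embed(advert)
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

Swapping `embed` for a real LLM embedding keeps the scoring interface unchanged, which is one reasonable way to structure the software.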
In this project, you will first survey methods for explaining the outcomes of large language models, and then implement these methods in an application area related to natural language processing.
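One of the simplest explanation methods such a survey is likely to cover is occlusion (leave-one-out) attribution. The sketch below is a minimal, model-agnostic version: `score_fn` is a placeholder for any model scoring function, not a specific LLM API.

```python
def occlusion_attribution(score_fn, tokens):
    """Leave-one-out attribution: the importance of each token is the
    drop in the model's score when that token is removed.
    score_fn is assumed to map a token list to a scalar score."""
    base = score_fn(tokens)
    return {t: base - score_fn(tokens[:i] + tokens[i + 1:])
            for i, t in enumerate(tokens)}
```

With a real model, `score_fn` would be, e.g., the probability the model assigns to a particular output; the keyword-counting scorer used in testing is purely illustrative.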
The project aims to learn neuro-symbolic operators that are effective in planning sequences of actions in direct and inverse tasks. Data obtained from human action observations, as well as the robot's own action executions, will be used to learn associations between low-level sensorimotor observations and high-level symbolic representations. The extracted symbols should be useful in planning, verification, and inversion; therefore, the extraction algorithms will be biased towards effectiveness in such tasks.
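To make the direct/inverse distinction concrete, a learned symbolic operator can be viewed STRIPS-style, with precondition, add, and delete sets; the same operator then supports both forward application (direct planning) and goal regression (inversion). The encoding below is a generic sketch, not the project's actual representation.

```python
def apply_op(op, state):
    """Direct use: apply the operator if its preconditions hold in the state."""
    if op["pre"] <= state:
        return (state - op["del"]) | op["add"]
    return None  # operator not applicable

def regress_op(op, goal):
    """Inverse use: compute what must hold before the operator so that
    the goal holds after it (goal regression)."""
    if op["del"] & goal:
        return None  # operator destroys part of the goal
    return (goal - op["add"]) | op["pre"]
```

A learning-time bias towards planning, verification, and inversion then amounts to preferring extracted operators for which both functions behave consistently on the observed data.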
In this project, a set of segmented skill demonstrations is hierarchically clustered to form a hierarchical task network, which the supervisory system exploits for task execution and monitoring. The associated motion patterns and relevant task parameters are also learned. The training data are further used to build a generative model of the underlying probability distribution, providing a probabilistic representation of a template skill.
This project extracts parameterised skills from human-level specifications (e.g., CAD and MTM) and human demonstrations. The following methodological steps will be taken. First, a palette of basic robotic skills will be constructed. Then, human demonstrations will be recognized as sequences of segmented skills. Finally, the skills and their task-relevant parameters will be extracted; this parameterisation enables fast generalisation of a skill to different use cases.
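A minimal sketch of the extracted representation, assuming demonstrations arrive pre-segmented with candidate skill labels; the `Skill` fields and the palette names are hypothetical, not the project's actual palette.

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """A basic robotic skill with its task-relevant parameters."""
    name: str
    params: dict = field(default_factory=dict)

def recognize(demonstration, palette):
    """Map each pre-segmented demonstration segment to a palette skill,
    carrying its parameters along; segments outside the palette are dropped."""
    return [Skill(seg["skill"], seg.get("params", {}))
            for seg in demonstration if seg["skill"] in palette]
```

Generalisation to a new use case then reduces to re-binding `params` (e.g., a different object or pose) without relearning the skill sequence.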
We will identify the various objects and elements in the human's surroundings and understand how they relate to the human's task, interaction, and overall experience. To allow robot learning from visual observation, human actions must first be detected. Here, detection encompasses estimating both the temporal boundaries and the labels of actions, which we will detect in third-person human videos.
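The temporal-boundary side of detection can be sketched as collapsing per-frame predictions into labelled segments. The per-frame labels here would come from a video model, which this sketch simply assumes as given input.

```python
def detect_actions(frame_labels):
    """Collapse per-frame action labels into (start, end, label) segments.
    Frames labelled None are background. A stand-in for the output stage
    of a temporal action-detection model on third-person video."""
    segments = []
    start, current = None, None
    for t, lab in enumerate(frame_labels):
        if lab != current:
            if current is not None:
                segments.append((start, t - 1, current))
            start, current = t, lab
    if current is not None:
        segments.append((start, len(frame_labels) - 1, current))
    return segments
```

The resulting (boundary, label) pairs are exactly the detection outputs the project needs to feed into robot learning from observation.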