Support for Interview Preparation with Deep Learning Based Language Model

Document Type

Conference Proceeding

Publication Date

8-25-2021

Abstract

Deep learning-based language models (LMs) have significantly enhanced services related to language generation and classification. Our focus in this paper is the Multiple Mini Interview (MMI), which is commonly used internationally by medical schools to screen applicants based on their ability to answer short questions in a considerate, professional manner. We establish the ability of LMs, specifically GPT-3, to generate MMI questions, simulate responses, and rate answers. We compare these simulated questions with their human-generated counterparts and, after identifying the optimal hyperparameters, find that 92% of generated questions are capable of fooling humans. We also find that, after identifying the optimal hyperparameters for question-answering, LMs are capable of producing high-quality simulated MMI responses, with an average human rating of 3.5 out of 5. Finally, GPT-3 is shown to have some agreement with human ratings, although it tends to overestimate the quality of responses. Conditional text generation by LMs alone appears able to significantly support MMI preparation.
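To make the abstract's workflow concrete, the following is a minimal sketch (not the authors' code) of how GPT-3's conditional text generation could be used to produce an MMI question and a simulated response through the OpenAI Completion API as it existed in 2021. The prompt wording, engine choice, and sampling hyperparameters (temperature, top_p) shown here are illustrative assumptions, not the values reported in the paper.

```python
# Illustrative sketch: prompting GPT-3 for MMI question generation and
# response simulation. Prompts and hyperparameter values are assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential


def generate_mmi_question(temperature=0.7, top_p=0.9):
    """Ask GPT-3 to produce a single MMI-style scenario question."""
    prompt = (
        "Write a Multiple Mini Interview (MMI) question that asks a medical "
        "school applicant to respond to a short ethical scenario.\n\nQuestion:"
    )
    result = openai.Completion.create(
        engine="davinci",          # GPT-3 base engine available circa 2021
        prompt=prompt,
        max_tokens=100,
        temperature=temperature,   # sampling hyperparameters would be tuned
        top_p=top_p,
    )
    return result.choices[0].text.strip()


def simulate_response(question, temperature=0.7):
    """Ask GPT-3 to answer a given MMI question as an applicant might."""
    prompt = f"MMI question: {question}\n\nA considerate, professional answer:"
    result = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=200,
        temperature=temperature,
    )
    return result.choices[0].text.strip()


if __name__ == "__main__":
    q = generate_mmi_question()
    print("Question:", q)
    print("Simulated answer:", simulate_response(q))
```

In this kind of setup, the hyperparameter search described in the abstract would amount to sweeping values such as temperature and top_p and having humans (or the model itself) rate the resulting questions and answers.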
