A Benchmark Model for Language Models towards Increased Transparency

Ayse Kok Arslan

Abstract

One of the most rapidly advancing AI technologies in recent years has been language models (LMs), which necessitate comparison, or benchmarking, across many LMs to enhance the transparency of these models. The purpose of this study is to provide a fuller characterization of LMs, rather than focusing on a single aspect, in order to increase societal impact. After a brief overview of the constituents of a benchmark and the features of transparency, this study explores the main aspects of a model (scenario, adaptation, and metric) required to provide a roadmap for how to evaluate language models. Given the lack of studies in the field, it is a step towards the design of more sophisticated models and aims to raise awareness of the importance of developing benchmarks for AI models.


