The latest language models enable automatic generation of convincing text fragments. In our demo we use this as an interactive writing aid.
Start the demo, type some text, and press the Tab key to get automatically generated suggestions for continuing your essay.
The suggestions depend on the words already in the document; it doesn't matter whether you typed them yourself or accepted them from earlier suggestions.
If you don't like a suggestion, press Tab again for a new one.
The language model used in this demo is based on GPT-2, a neural network built from so-called transformer blocks. Before text reaches the network, a tokenizer splits it into "tokens" (words or parts of words) and converts them to numbers the network can process. GPT-2 is a decoder-only transformer: given the sequence of tokens seen so far, it generates new text one token at a time.
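The tokenization step can be sketched with a toy greedy longest-match tokenizer over a small made-up vocabulary (GPT-2 itself uses byte-pair encoding with a vocabulary of roughly 50,000 tokens; this only illustrates the idea of splitting text into known pieces):

```python
# Hypothetical subword vocabulary; real BPE vocabularies are learned from data.
VOCAB = {"writ": 0, "ing": 1, "write": 2, "a": 3, "aid": 4, " ": 5}

def tokenize(text):
    """Split text into vocabulary pieces, always taking the longest match."""
    tokens = []
    i = 0
    while i < len(text):
        # try the longest possible piece starting at position i first
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in VOCAB:
                tokens.append(piece)
                i = j
                break
        else:
            raise ValueError(f"no vocabulary entry matches at position {i}")
    return tokens

print(tokenize("writing aid"))  # ['writ', 'ing', ' ', 'aid']
```

A word the vocabulary doesn't contain whole, like "writing", is represented by smaller pieces, which is how a fixed vocabulary can cover arbitrary text.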
This model was trained on a corpus of 8 million documents and built up an internal model of the English language without any human intervention. The training objective is called "causal language modeling": the model is optimized to predict the best-fitting next word given the words that precede it.
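The idea behind causal language modeling can be illustrated with a tiny count-based bigram model, a drastic simplification: GPT-2 conditions on the entire preceding context with a neural network, not on a single previous word with counts, but the prediction task is the same.

```python
from collections import Counter, defaultdict

# Count which word follows which in a miniature "corpus".
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat'
```

"cat" is predicted after "the" simply because that pair occurred most often in the training text; a real language model learns the same kind of statistics, but over long contexts and a vast corpus.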
Practical applications of this kind of model are created by "fine-tuning": an extra training step on domain-specific texts. This allows the model to be taught to write songs or computer programs, for example. And when a company's external communication is used as training data, it could help to answer incoming correspondence quickly.
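The effect of fine-tuning can be sketched by extending the toy bigram model above: real fine-tuning continues gradient training of the full neural network on the new texts, but the key idea, an extra training pass on domain-specific data that shifts the model's predictions, is the same.

```python
from collections import Counter, defaultdict

def train(model, text):
    """Add (word, next-word) counts from the text to the model."""
    words = text.split()
    for word, nxt in zip(words, words[1:]):
        model[word][nxt] += 1

def predict_next(model, word):
    return model[word].most_common(1)[0][0]

model = defaultdict(Counter)

# "pre-training" on generic text
train(model, "the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # 'cat'

# "fine-tuning": extra training on (hypothetical) company correspondence
train(model, "the customer called the customer emailed the customer paid")
print(predict_next(model, "the"))  # 'customer'
```

After the extra training step the same model prefers the domain's vocabulary, which is exactly what fine-tuning a language model on a company's correspondence is meant to achieve.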