Generating Summaries from Conversation Transcripts

For Speech to Text (STT) practitioners, especially those building conversational AI applications, the ability to create transcript summaries is crucial. A summary communicates the conversation's main points without requiring readers to work through the entire transcript, which can be painstaking since transcripts often contain niceties, small talk, and other sections that are not part of the core discussion. An AI-generated summary helps readers quickly grasp the salient points of a conversation, and both abstractive and extractive summaries can make conversations easier to digest.

Automatic speech recognition (ASR) enables the comprehension of natural language and is used to record, analyze, and convert human speech into text (STT). In this article, we will look at the text summarization techniques we can apply once we have a transcript, or indeed any text document.

Text Summarization from Conversation Transcripts

Text summarization is a Natural Language Processing task with several applications when building conversational apps or working with transcripts. Text summarization can be broadly classified into two categories: abstractive and extractive summarization.

  • Extractive: Builds a summary from important sentences extracted verbatim from the original text. Simple, but restrictive.
  • Abstractive: Generates new sentences that distill the essence of the original text using natural language generation techniques, much as a human would. Harder, but flexible: it produces more human-like, readable summaries and can genuinely paraphrase the original text.

Text summarization systems have traditionally been extractive, but with recent advances in NLP model architectures, more and more systems are leveraging the higher readability and comprehension power of abstractive text summarization.

Key Challenges In Text Summarization:

  • Topic identification
  • Interpretation
  • Summary generation
  • Evaluation

Text Summarization Approaches

Extraction-based text Summarization

In extractive text summarization, a subset of words, phrases, or sentences is chosen to construct the summary. The process is straightforward: the key phrases of the original text are extracted and concatenated.

The method involves neither rephrasing nor the use of synonyms, which keeps it simple: words and sentences are taken exactly as they appear in the original text and rearranged into a summary.

Key Tasks:

  • Identifying key phrases and using them to select which sentences of the document to include in the summary.
  • Scoring each sentence in the text based on its representation.
  • Composing the summary by selecting the top-scoring sentences and putting them together.

Example:

Before Extraction-based Text Summarization

Alice and Ben boarded a train to travel to Mexico. While on the train, Ben forgot his files and thought of going back home.

After Extraction-based Text Summarization

Alice and Ben boarded a train to Mexico. Ben thought about going home.
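
To make these tasks concrete, below is a minimal sketch of a frequency-based extractive summarizer in Python. The sentence splitter, stopword list, and scoring function are simplified assumptions for illustration; production systems use more robust tokenization and scoring (e.g., TF-IDF or graph-based methods like TextRank).

```python
# A minimal frequency-based extractive summarizer (illustrative sketch).
# It scores each sentence by the corpus frequency of its words and
# keeps the top-scoring sentences verbatim, in their original order.
import re
from collections import Counter

# Tiny illustrative stopword list; real systems use a full one.
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "on", "his", "while"}

def extractive_summary(text: str, num_sentences: int = 1) -> str:
    # Naive, dependency-free sentence splitting on terminal punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Word frequencies across the whole document, ignoring stopwords.
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if w not in STOPWORDS)

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    # Pick the highest-scoring sentences, preserving document order.
    top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    return " ".join(s for s in sentences if s in top)

transcript = (
    "Alice and Ben boarded a train to travel to Mexico. "
    "While on the train, Ben forgot his files and thought of going back home."
)
print(extractive_summary(transcript, num_sentences=1))
```

Note that the output is always a subset of the original sentences: the sketch never rephrases anything, which is exactly the restriction described above.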

Abstraction-based Text Summarization

Using a natural language-generation strategy, the abstractive text summarization method first builds an internal semantic representation of the given text.

Abstraction-based summarization is more difficult than extraction-based summarization. It tries to truly understand the semantics of the text and, rather than stringing together pieces of the original, uses that understanding to generate shorter text containing the same information. This typically comes with the ability to specify how long the generated text should be, giving you some control over the verbosity of the summary, while the model picks out important details or paraphrases them in a human-like manner.

The ability of the underlying model to understand the semantics of the given text, relies heavily on recent advances in ML model architecture and techniques, and involves training on sufficiently relevant data that helps the model learn how to understand the conversation style and semantics.

Key Tasks:

  • Creating an internal representation of all the information provided in the text.
  • Generating novel text from this internal representation using natural language.

Example:

Before Abstraction-based Text Summarization

Alice and Ben boarded a train to travel to Mexico. While on the train, Ben forgot his files and thought of going back home.

After Abstraction-based Text Summarization

Alice and Ben both boarded their train to Mexico, but Ben then realized that he had forgotten his files and was considering heading home.
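
A quick way to experiment with abstractive summarization is the Hugging Face transformers pipeline. The sketch below uses facebook/bart-large-cnn, one commonly used summarization checkpoint; any seq2seq summarization model would work, and the max_length/min_length parameters (in tokens) give the coarse control over verbosity mentioned above.

```python
# Abstractive summarization with a pretrained seq2seq model via the
# Hugging Face transformers pipeline (pip install transformers torch).
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

transcript = (
    "Alice and Ben boarded a train to travel to Mexico. "
    "While on the train, Ben forgot his files and thought of going back home."
)

# max_length / min_length bound the generated summary's token count,
# giving some control over verbosity.
result = summarizer(transcript, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```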

How to do Text Summarization?

Now that we know the two categories of text summarization, let's look at how advanced model architectures such as BERT, T5, Pegasus, and GPT-J can perform it with the help of Natural Language Processing, as explained below:

Using BERT/Transformer Models

[Image: fine-tuning.png (Image Source)]

BERT stands for Bidirectional Encoder Representations from Transformers and is one of Google's language representation models. Pretrained model weights, trained on extremely large datasets, are released publicly by the likes of Google, Facebook, and others, and the ability to fine-tune them on domain-relevant data or a particular task revolutionized how we do NLP today. Fine-tuning requires much smaller, task- or domain-relevant datasets and a fraction of the compute that pretraining does, allowing us to take full advantage of the pretrained weights. This matters: without pretraining, the compute required to train these models from scratch would be far higher, limiting individuals' and startups' ability to compete in this space while also taking a heavy toll on the environment via carbon emissions. BERT uses a unified architecture across different downstream tasks, so the same pretrained model can be fine-tuned for final tasks that differ from the task it was originally trained on.
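
As a sketch of what using such a pretrained checkpoint looks like in practice, the snippet below runs a T5-style encoder-decoder model (t5-small here, purely as an illustrative choice) on a summarization input. T5 was pretrained with task prefixes, so the input is prefixed with "summarize:"; fine-tuning on your own domain data would use the same tokenizer and model classes with a training loop over labeled (text, summary) pairs.

```python
# Generating a summary with a pretrained encoder-decoder checkpoint.
# "t5-small" is an illustrative choice; a domain fine-tuned checkpoint
# would be loaded the same way.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

text = (
    "summarize: Alice and Ben boarded a train to travel to Mexico. "
    "While on the train, Ben forgot his files and thought of going back home."
)
inputs = tokenizer(text, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=40, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```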

Using Large Language Models

[Image: encoder-decoder.png (Image Source)]

Summaries can now be generated thanks to prompt engineering and other recent advancements making waves in language/NLP/NLU work. Advances in natural language processing have accelerated in recent years, making it possible to solve complex problems with pre-trained models and to generate a concise summary of a source text. Large language models offer abstractive text summarization, enabling businesses to get high-quality summaries for text of any length with a single request.
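
As an illustration of that "single request" workflow, here is a minimal sketch of prompt-based summarization against an OpenAI-style chat completions API. The model name and prompt wording are illustrative assumptions, not recommendations; any hosted LLM with a chat endpoint can be used the same way.

```python
# Prompt-based abstractive summarization with a hosted LLM
# (pip install openai; expects OPENAI_API_KEY in the environment).
from openai import OpenAI

client = OpenAI()

transcript = (
    "Alice and Ben boarded a train to travel to Mexico. "
    "While on the train, Ben forgot his files and thought of going back home."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute any chat-capable model
    messages=[
        {"role": "system", "content": "You summarize conversation transcripts concisely."},
        {"role": "user", "content": f"Summarize this transcript in one sentence:\n\n{transcript}"},
    ],
)
print(response.choices[0].message.content)
```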

Businesses capture textual data and documents as a component of their processes or offerings, and large language models let them put that text to work. Organizational and customer issues can be resolved more effectively with high-quality, human-like textual summaries.

Large documents, when summarized by humans, require hours and hours of work, which costs a lot of money; the people summarizing the content also need to learn or already have knowledge of the domain they are working in. Large language models can automate all of this, summarizing text in a small fraction of the time it would otherwise take.

Benefits of Text Summarization for Businesses

Automatic text summarization can extract and summarize key information from thousands of pages of documents. It understands human language using natural language processing (NLP) and produces summaries using natural language generation (NLG), making it possible to read, edit, and summarize texts at scale.

The following are some advantages that companies can gain from adopting text summarization software:

  • Reducing Time: An automatic text summarizing program reads, edits, and summarizes content in a matter of seconds. With text summarization powered by NLP, consumers need to read far less material to acquire the information they want.
  • Saving Money: Using summarizing tools reduces the need to hire outside consultants. Text summarizing software can work across most languages and domains, summarizing texts without human intervention.
  • Generating High-Quality Summaries: NLP summarization services generate high-quality summaries for use in other publications or on your website.

To try out a cool example of this, check out semanticscholar.org, which provides TLDRs (Too Long; Didn't Read) for academic papers. Developed by the Allen Institute for AI, a research institute in the AI space, its models were trained on scientific papers, enabling them to automatically generate short, concise summaries of all kinds of scientific papers. Imagine doing that by hand, having to hire researchers from various fields just to understand and summarize these papers.

Final Words

Text summarization is one of the most helpful applications of machine learning and natural language processing. It has recently gained significant popularity thanks to its adaptability and its ability to understand and condense text. The two text summarization techniques discussed in this article are important tools for concisely and accurately condensing lengthy documents.

Text summarization is one of the features our Unified API offers: it uses speech-to-text from a variety of providers and carries out downstream tasks such as summarization and extracting questions and answers. We also have a powerful Transcript Editor that lets you visually select sections of meeting transcripts to summarize, among other useful features. Our API and toolkit can be used to rapidly build or integrate powerful conversational intelligence features into your application.
