HuggingFace Transformers 4.6 : Notebooks : How to use Pipelines (Translation/Commentary)
Translation: ClassCat Co., Ltd. Sales Information
Created: 06/14/2021 (4.6.1)
* This page is a translation of the following HuggingFace Transformers documentation, with supplementary explanations added where appropriate:
- Notebooks : How to use Pipelines
* The sample code has been verified to run, with additions and modifications made where necessary.
Notebooks : How to use Pipelines
Newly introduced in transformers v2.3.0, pipelines provide a high-level, easy-to-use API for running inference over a variety of downstream tasks, including:
- Sentence classification (sentiment analysis): indicates whether the whole sentence is positive or negative, i.e. a binary classification or logistic-regression task.
- Token classification (named entity recognition, part-of-speech tagging): assigns a label to each sub-entity (token) of the input, i.e. a classification task.
- Question answering: given a (question, context) pair, the model must find the span of text in the context that answers the question.
- Mask filling: suggests possible words to fill a masked input, given the surrounding context.
- Summarization: condenses the input article into a shorter one.
- Translation: translates the input from one language into another.
- Feature extraction: maps the input into a higher-dimensional space learned from the data.
Pipelines encapsulate the whole process of every NLP task:
- Tokenization: splits the initial input into multiple sub-entities with … properties (i.e. tokens).
- Inference: maps every token into a more meaningful representation.
- Decoding: uses the above representation to generate and/or extract the final output for the underlying task (a step-by-step sketch of these three stages follows the code snippet below).
The whole API is exposed to the end user through the pipeline() method, with the following structure:
from transformers import pipeline
# Using default model and tokenizer for the task
pipeline("<task-name>")
# Using a user-specified model
pipeline("<task-name>", model="<model_name>")
# Using custom model/tokenizer as str
pipeline('<task-name>', model='<model name>', tokenizer='<tokenizer_name>')
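To make the three stages described above concrete, here is a minimal sketch that performs tokenization, inference and decoding by hand for sentiment analysis. The checkpoint name below is an assumption (it is the checkpoint the sentiment-analysis pipeline usually falls back to); any sequence-classification checkpoint would work the same way.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed checkpoint: the usual default of the sentiment-analysis pipeline
model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# 1. Tokenization: split the raw text into tokens and map them to ids
inputs = tokenizer("Such a nice weather outside !", return_tensors="pt")

# 2. Inference: run the model to obtain per-class logits
with torch.no_grad():
    logits = model(**inputs).logits

# 3. Decoding: turn the logits into a human-readable label and a score
probs = torch.softmax(logits, dim=-1)
label_id = int(probs.argmax(dim=-1))
print({"label": model.config.id2label[label_id], "score": float(probs[0, label_id])})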
!pip install -q transformers
from __future__ import print_function
import ipywidgets as widgets
from transformers import pipeline
1. Sentence Classification – Sentiment Analysis
nlp_sentence_classif = pipeline('sentiment-analysis')
nlp_sentence_classif('Such a nice weather outside !')
[{'label': 'POSITIVE', 'score': 0.9997656}]
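The same pipeline also accepts a list of sentences and returns one result per input; a small sketch (the second sentence is just an arbitrary example):
# Passing a list returns a list of results, one dict per input sentence
results = nlp_sentence_classif([
    'Such a nice weather outside !',
    'This movie was a complete waste of time.',
])
for r in results:
    print(r['label'], round(r['score'], 4))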
2. Token Classification – Named Entity Recognition
nlp_token_class = pipeline('ner')
nlp_token_class('Hugging Face is a French company based in New-York.')
[{'entity': 'I-ORG', 'score': 0.9970937967300415, 'word': 'Hu'}, {'entity': 'I-ORG', 'score': 0.9345749020576477, 'word': '##gging'}, {'entity': 'I-ORG', 'score': 0.9787060022354126, 'word': 'Face'}, {'entity': 'I-MISC', 'score': 0.9981995820999146, 'word': 'French'}, {'entity': 'I-LOC', 'score': 0.9983047246932983, 'word': 'New'}, {'entity': 'I-LOC', 'score': 0.8913459181785583, 'word': '-'}, {'entity': 'I-LOC', 'score': 0.9979523420333862, 'word': 'York'}]
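By default the NER pipeline labels individual word pieces (note the 'Hu' / '##gging' split above). If you prefer whole entities, the grouped_entities flag available in this version merges consecutive tokens that share a tag (later releases supersede it with aggregation_strategy); a minimal sketch:
# Merge word pieces belonging to the same entity into a single span
nlp_token_class_grouped = pipeline('ner', grouped_entities=True)
for ent in nlp_token_class_grouped('Hugging Face is a French company based in New-York.'):
    print(ent['entity_group'], ent['word'], round(ent['score'], 3))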
3. Question Answering
nlp_qa = pipeline('question-answering')
nlp_qa(context='Hugging Face is a French company based in New-York.', question='Where is based Hugging Face ?')
{'answer': 'New-York.', 'end': 50, 'score': 0.9632969241603995, 'start': 42}
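Nothing stops you from reusing the same pipeline for several questions about the same context; a quick sketch with two arbitrary questions:
# Ask several questions against the same context, one call per question
qa_context = 'Hugging Face is a French company based in New-York.'
for question in ['Where is based Hugging Face ?', 'What nationality is Hugging Face ?']:
    answer = nlp_qa(question=question, context=qa_context)
    print(question, '->', answer['answer'])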
4. Text Generation – Mask Filling
nlp_fill = pipeline('fill-mask')
nlp_fill('Hugging Face is a French company based in ' + nlp_fill.tokenizer.mask_token)
[{'score': 0.23106741905212402, 'sequence': '<s> Hugging Face is a French company based in Paris', 'token': 2201}, {'score': 0.08198167383670807, 'sequence': '<s> Hugging Face is a French company based in Lyon', 'token': 12790}, {'score': 0.04769487306475639, 'sequence': '<s> Hugging Face is a French company based in Geneva', 'token': 11559}, {'score': 0.04762246832251549, 'sequence': '<s> Hugging Face is a French company based in Brussels', 'token': 6497}, {'score': 0.041305847465991974, 'sequence': '<s> Hugging Face is a French company based in France', 'token': 1470}]
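The number of suggestions can be controlled as well; the call-time argument is assumed to be top_k in this version (very old releases used topk instead). A sketch:
# Return only the two most likely completions for the masked position
nlp_fill('Hugging Face is a French company based in ' + nlp_fill.tokenizer.mask_token, top_k=2)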
5. Summarization
Summarization is currently supported by Bart and T5.
TEXT_TO_SUMMARIZE = """
New York (CNN)When Liana Barrientos was 23 years old, she got married in Westchester County, New York.
A year later, she got married again in Westchester County, but to a different man and without divorcing her first husband.
Only 18 days after that marriage, she got hitched yet again. Then, Barrientos declared "I do" five more times, sometimes only within two weeks of each other.
In 2010, she married once more, this time in the Bronx. In an application for a marriage license, she stated it was her "first and only" marriage.
Barrientos, now 39, is facing two criminal counts of "offering a false instrument for filing in the first degree," referring to her false statements on the
2010 marriage license application, according to court documents.
Prosecutors said the marriages were part of an immigration scam.
On Friday, she pleaded not guilty at State Supreme Court in the Bronx, according to her attorney, Christopher Wright, who declined to comment further.
After leaving court, Barrientos was arrested and charged with theft of service and criminal trespass for allegedly sneaking into the New York subway through an emergency exit, said Detective
Annette Markowski, a police spokeswoman. In total, Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002.
All occurred either in Westchester County, Long Island, New Jersey or the Bronx. She is believed to still be married to four men, and at one time, she was married to eight men at once, prosecutors say.
Prosecutors said the immigration scam involved some of her husbands, who filed for permanent residence status shortly after the marriages.
Any divorces happened only after such filings were approved. It was unclear whether any of the men will be prosecuted.
The case was referred to the Bronx District Attorney\'s Office by Immigration and Customs Enforcement and the Department of Homeland Security\'s
Investigation Division. Seven of the men are from so-called "red-flagged" countries, including Egypt, Turkey, Georgia, Pakistan and Mali.
Her eighth husband, Rashid Rajput, was deported in 2006 to his native Pakistan after an investigation by the Joint Terrorism Task Force.
If convicted, Barrientos faces up to four years in prison. Her next court appearance is scheduled for May 18.
"""
summarizer = pipeline('summarization')
summarizer(TEXT_TO_SUMMARIZE)
[{'summary_text': 'Liana Barrientos has been married 10 times, sometimes within two weeks of each other. Prosecutors say the marriages were part of an immigration scam. She is believed to still be married to four men, and at one time, she was married to eight men at once. Her eighth husband was deported in 2006 to his native Pakistan.'}]
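The length of the summary can be bounded through generation arguments that the pipeline forwards to the underlying generate() call; the values below are arbitrary and counted in tokens, not characters:
# Constrain the summary to roughly 30-80 generated tokens
summarizer(TEXT_TO_SUMMARIZE, min_length=30, max_length=80)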
6. Translation
Translation is currently supported by T5 for the language mappings English-to-French (translation_en_to_fr), English-to-German (translation_en_to_de) and English-to-Romanian (translation_en_to_ro).
# English to French
translator = pipeline('translation_en_to_fr')
translator("HuggingFace is a French company that is based in New York City. HuggingFace's mission is to solve NLP one commit at a time")
[{'translation_text': 'HuggingFace est une entreprise française basée à New York et dont la mission est de résoudre les problèmes de NLP, un engagement à la fois.'}]
# English to German
translator = pipeline('translation_en_to_de')
translator("The history of natural language processing (NLP) generally started in the 1950s, although work can be found from earlier periods.")
[{'translation_text': 'Die Geschichte der natürlichen Sprachenverarbeitung (NLP) begann im Allgemeinen in den 1950er Jahren, obwohl die Arbeit aus früheren Zeiten zu finden ist.'}]
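The remaining built-in mapping works the same way; a sketch for English-to-Romanian (output not shown):
# English to Romanian
translator = pipeline('translation_en_to_ro')
translator('HuggingFace is a company based in New York City.')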
7. Text Generation
Text generation is currently supported by GPT-2, OpenAI-GPT, TransfoXL, XLNet, CTRL and Reformer.
text_generator = pipeline("text-generation")
text_generator("Today is a beautiful day and I will")
[{'generated_text': 'Today is a beautiful day and I will celebrate my birthday!"\n\nThe mother told CNN the two had planned their meal together. After dinner, she added that she and I walked down the street and stopped at a diner near her home. "He'}]
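Generation arguments such as max_length, do_sample and num_return_sequences are forwarded to the underlying generate() call, so you can ask for several alternative continuations; the values below are arbitrary:
# Sample two different continuations of at most 30 tokens each
text_generator('Today is a beautiful day and I will',
               max_length=30, num_return_sequences=2, do_sample=True)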
8. Projection – Feature Extraction
import numpy as np
nlp_features = pipeline('feature-extraction')
output = nlp_features('Hugging Face is a French company based in Paris')
np.array(output).shape # (Samples, Tokens, Vector Size)
(1, 12, 768)
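One common (if simple) way to turn these per-token vectors into a single sentence embedding is to average them; a minimal sketch using the output above:
# Mean-pool the token vectors into a single 768-dimensional sentence vector
sentence_embedding = np.array(output)[0].mean(axis=0)
sentence_embedding.shape  # (768,)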
Alright! You now have a good picture of what is possible through transformers' pipelines, and more will be added in upcoming releases.
In the meantime, you can try the different pipelines with your own inputs.
task = widgets.Dropdown(
    options=['sentiment-analysis', 'ner', 'fill_mask'],
    value='ner',
    description='Task:',
    disabled=False
)
input = widgets.Text(
    value='',
    placeholder='Enter something',
    description='Your input:',
    disabled=False
)
def forward(_):
    if len(input.value) > 0:
        if task.value == 'ner':
            output = nlp_token_class(input.value)
        elif task.value == 'sentiment-analysis':
            output = nlp_sentence_classif(input.value)
        else:
            if input.value.find(nlp_fill.tokenizer.mask_token) == -1:
                output = nlp_fill(input.value + ' ' + nlp_fill.tokenizer.mask_token)
            else:
                output = nlp_fill(input.value)
        print(output)
input.on_submit(forward)
display(task, input)
Example output (task 'ner' on a sample input):
[{'word': 'Peter', 'score': 0.9935821294784546, 'entity': 'I-PER'}, {'word': 'Pan', 'score': 0.9901397228240967, 'entity': 'I-PER'}, {'word': 'Marseille', 'score': 0.9984904527664185, 'entity': 'I-LOC'}, {'word': 'France', 'score': 0.9998687505722046, 'entity': 'I-LOC'}]
context = widgets.Textarea(
    value='Einstein is famous for the general theory of relativity',
    placeholder='Enter something',
    description='Context:',
    disabled=False
)
query = widgets.Text(
    value='Why is Einstein famous for ?',
    placeholder='Enter something',
    description='Question:',
    disabled=False
)
def forward(_):
    if len(context.value) > 0 and len(query.value) > 0:
        output = nlp_qa(question=query.value, context=context.value)
        print(output)
query.on_submit(forward)
display(context, query)
Example output for the default context and question:
{'score': 0.40340594113729367, 'start': 27, 'end': 54, 'answer': 'general theory of relativity'}
That's all.