PyTorch 1.1 Tutorials : Text : Chatbot

PyTorch 1.1 Tutorials : Text : Chatbot (translation/commentary)
Translation: ClassCat Co., Ltd. Sales Information
Created: 07/23/2019 (1.1.0)

* This page is a translation of PyTorch 1.1 Tutorials : Text : CHATBOT TUTORIAL, with supplementary explanations added where appropriate:

* We have verified that the sample code runs, with additions and modifications made where necessary.
* Feel free to link to this page, but we would appreciate a note to sales-info@classcat.com.

 

 

Tutorials : Text : Chatbot

In this tutorial, we explore a fun and interesting use case of recurrent sequence-to-sequence models. We will train a simple chatbot using movie scripts from the Cornell Movie-Dialogs Corpus.

Conversational models are a hot topic in artificial intelligence research. Chatbots can be found in a variety of settings, including customer-service applications and online help desks. These bots are often powered by retrieval-based models, which output predefined responses to questions of certain forms. In a highly restricted domain like a company's IT help desk, these models may be sufficient; however, they are not robust enough for more general use cases. Teaching a machine to carry out a meaningful conversation with a human in multiple domains is a research question that is far from solved. Recently, the deep learning boom has enabled powerful generative models like Google's Neural Conversational Model, which marks a large step towards multi-domain generative conversational models. In this tutorial, we implement this kind of model in PyTorch.

 

> hello?
Bot: hello .
> where am I?
Bot: you re in a hospital .
> who are you?
Bot: i m a lawyer .
> how are you doing?
Bot: i m fine .
> are you my friend?
Bot: no .
> you're under arrest
Bot: i m trying to help you !
> i'm just kidding
Bot: i m sorry .
> where are you from?
Bot: san francisco .
> it's time for me to leave
Bot: i know .
> goodbye
Bot: goodbye .

Tutorial Highlights

  • Handle loading and preprocessing of the Cornell Movie-Dialogs Corpus dataset
  • Implement a sequence-to-sequence model with the Luong attention mechanism
  • Jointly train encoder and decoder models using mini-batches
  • Implement a greedy-search decoding module
  • Interact with the trained chatbot

Acknowledgements

This tutorial borrows code from the following sources:

  1. Yuan-Kuei Wu's pytorch-chatbot implementation: https://github.com/ywk991112/pytorch-chatbot
  2. Sean Robertson's practical-pytorch seq2seq-translation example: https://github.com/spro/practical-pytorch/tree/master/seq2seq-translation
  3. FloydHub's Cornell Movie Corpus preprocessing code: https://github.com/floydhub/textutil-preprocess-cornell-movie-corpus

 

Preparations

To get started, download the data ZIP file here and put it in a data/ directory under the current directory.

After that, let's import some necessities.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import torch
from torch.jit import script, trace
import torch.nn as nn
from torch import optim
import torch.nn.functional as F
import csv
import random
import re
import os
import unicodedata
import codecs
from io import open
import itertools
import math


USE_CUDA = torch.cuda.is_available()
device = torch.device("cuda" if USE_CUDA else "cpu")

 

Load & Preprocess Data

The next step is to reformat our data file and load the data into structures that we can work with.

The Cornell Movie-Dialogs Corpus is a rich dataset of movie character dialog:

  • 220,579 conversational exchanges between 10,292 pairs of movie characters
  • 9,035 characters from 617 movies
  • 304,713 total utterances

This dataset is large and diverse, and there is great variation in language formality, time periods, sentiment, etc. Our hope is that this diversity makes our model robust to many forms of inputs and queries.

First, let's look at some lines of our data file to see the original format.

corpus_name = "cornell movie-dialogs corpus"
corpus = os.path.join("data", corpus_name)

def printLines(file, n=10):
    with open(file, 'rb') as datafile:
        lines = datafile.readlines()
    for line in lines[:n]:
        print(line)

printLines(os.path.join(corpus, "movie_lines.txt"))
b'L1045 +++$+++ u0 +++$+++ m0 +++$+++ BIANCA +++$+++ They do not!\n'
b'L1044 +++$+++ u2 +++$+++ m0 +++$+++ CAMERON +++$+++ They do to!\n'
b'L985 +++$+++ u0 +++$+++ m0 +++$+++ BIANCA +++$+++ I hope so.\n'
b'L984 +++$+++ u2 +++$+++ m0 +++$+++ CAMERON +++$+++ She okay?\n'
b"L925 +++$+++ u0 +++$+++ m0 +++$+++ BIANCA +++$+++ Let's go.\n"
b'L924 +++$+++ u2 +++$+++ m0 +++$+++ CAMERON +++$+++ Wow\n'
b"L872 +++$+++ u0 +++$+++ m0 +++$+++ BIANCA +++$+++ Okay -- you're gonna need to learn how to lie.\n"
b'L871 +++$+++ u2 +++$+++ m0 +++$+++ CAMERON +++$+++ No\n'
b'L870 +++$+++ u0 +++$+++ m0 +++$+++ BIANCA +++$+++ I\'m kidding.  You know how sometimes you just become this "persona"?  And you don\'t know how to quit?\n'
b'L869 +++$+++ u0 +++$+++ m0 +++$+++ BIANCA +++$+++ Like my fear of wearing pastels?\n'

 

Create formatted data file

For convenience, we'll create a nicely formatted data file in which each line contains a tab-separated query sentence and response sentence pair.

The following functions facilitate the parsing of the raw movie_lines.txt data file.

  • loadLines splits each line of the file into a dictionary of fields (lineID, characterID, movieID, character, text)
  • loadConversations groups fields of lines from loadLines into conversations based on movie_conversations.txt
  • extractSentencePairs extracts pairs of sentences from conversations
# Splits each line of the file into a dictionary of fields
def loadLines(fileName, fields):
    lines = {}
    with open(fileName, 'r', encoding='iso-8859-1') as f:
        for line in f:
            values = line.split(" +++$+++ ")
            # Extract fields
            lineObj = {}
            for i, field in enumerate(fields):
                lineObj[field] = values[i]
            lines[lineObj['lineID']] = lineObj
    return lines


# Groups fields of lines from `loadLines` into conversations based on *movie_conversations.txt*
def loadConversations(fileName, lines, fields):
    conversations = []
    with open(fileName, 'r', encoding='iso-8859-1') as f:
        for line in f:
            values = line.split(" +++$+++ ")
            # Extract fields
            convObj = {}
            for i, field in enumerate(fields):
                convObj[field] = values[i]
            # Convert string to list (convObj["utteranceIDs"] == "['L598485', 'L598486', ...]")
            lineIds = eval(convObj["utteranceIDs"])
            # Reassemble lines
            convObj["lines"] = []
            for lineId in lineIds:
                convObj["lines"].append(lines[lineId])
            conversations.append(convObj)
    return conversations


# Extracts pairs of sentences from conversations
def extractSentencePairs(conversations):
    qa_pairs = []
    for conversation in conversations:
        # Iterate over all the lines of the conversation
        for i in range(len(conversation["lines"]) - 1):  # We ignore the last line (no answer for it)
            inputLine = conversation["lines"][i]["text"].strip()
            targetLine = conversation["lines"][i+1]["text"].strip()
            # Filter wrong samples (if one of the lists is empty)
            if inputLine and targetLine:
                qa_pairs.append([inputLine, targetLine])
    return qa_pairs

Now we'll call these functions and create the file. We'll call it formatted_movie_lines.txt.

# Define path to new file
datafile = os.path.join(corpus, "formatted_movie_lines.txt")

delimiter = '\t'
# Unescape the delimiter
delimiter = str(codecs.decode(delimiter, "unicode_escape"))

# Initialize lines dict, conversations list, and field ids
lines = {}
conversations = []
MOVIE_LINES_FIELDS = ["lineID", "characterID", "movieID", "character", "text"]
MOVIE_CONVERSATIONS_FIELDS = ["character1ID", "character2ID", "movieID", "utteranceIDs"]

# Load lines and process conversations
print("\nProcessing corpus...")
lines = loadLines(os.path.join(corpus, "movie_lines.txt"), MOVIE_LINES_FIELDS)
print("\nLoading conversations...")
conversations = loadConversations(os.path.join(corpus, "movie_conversations.txt"),
                                  lines, MOVIE_CONVERSATIONS_FIELDS)

# Write new csv file
print("\nWriting newly formatted file...")
with open(datafile, 'w', encoding='utf-8') as outputfile:
    writer = csv.writer(outputfile, delimiter=delimiter, lineterminator='\n')
    for pair in extractSentencePairs(conversations):
        writer.writerow(pair)

# Print a sample of lines
print("\nSample lines from file:")
printLines(datafile)
Processing corpus...

Loading conversations...

Writing newly formatted file...

Sample lines from file:
b"Can we make this quick?  Roxanne Korrine and Andrew Barrett are having an incredibly horrendous public break- up on the quad.  Again.\tWell, I thought we'd start with pronunciation, if that's okay with you.\n"
b"Well, I thought we'd start with pronunciation, if that's okay with you.\tNot the hacking and gagging and spitting part.  Please.\n"
b"Not the hacking and gagging and spitting part.  Please.\tOkay... then how 'bout we try out some French cuisine.  Saturday?  Night?\n"
b"You're asking me out.  That's so cute. What's your name again?\tForget it.\n"
b"No, no, it's my fault -- we didn't have a proper introduction ---\tCameron.\n"
b"Cameron.\tThe thing is, Cameron -- I'm at the mercy of a particularly hideous breed of loser.  My sister.  I can't date until she does.\n"
b"The thing is, Cameron -- I'm at the mercy of a particularly hideous breed of loser.  My sister.  I can't date until she does.\tSeems like she could get a date easy enough...\n"
b'Why?\tUnsolved mystery.  She used to be really popular when she started high school, then it was just like she got sick of it or something.\n'
b"Unsolved mystery.  She used to be really popular when she started high school, then it was just like she got sick of it or something.\tThat's a shame.\n"
b'Gosh, if only we could find Kat a boyfriend...\tLet me see what I can do.\n'

 

Load and trim data

Our next order of business is to create a vocabulary and load query/response sentence pairs into memory. Note that we are dealing with sequences of words, which do not have an implicit mapping to a discrete numerical space. Thus, we must create one by mapping each unique word that we encounter in our dataset to an index value.

For this we define a Voc class, which keeps a mapping from words to indexes, a reverse mapping of indexes to words, a count of each word, and a total word count. The class provides methods for adding a word to the vocabulary (addWord), adding all words in a sentence (addSentence), and trimming infrequently seen words (trim). More on trimming later.

# Default word tokens
PAD_token = 0  # Used for padding short sentences
SOS_token = 1  # Start-of-sentence token
EOS_token = 2  # End-of-sentence token

class Voc:
    def __init__(self, name):
        self.name = name
        self.trimmed = False
        self.word2index = {}
        self.word2count = {}
        self.index2word = {PAD_token: "PAD", SOS_token: "SOS", EOS_token: "EOS"}
        self.num_words = 3  # Count SOS, EOS, PAD

    def addSentence(self, sentence):
        for word in sentence.split(' '):
            self.addWord(word)

    def addWord(self, word):
        if word not in self.word2index:
            self.word2index[word] = self.num_words
            self.word2count[word] = 1
            self.index2word[self.num_words] = word
            self.num_words += 1
        else:
            self.word2count[word] += 1

    # Remove words below a certain count threshold
    def trim(self, min_count):
        if self.trimmed:
            return
        self.trimmed = True

        keep_words = []

        for k, v in self.word2count.items():
            if v >= min_count:
                keep_words.append(k)

        print('keep_words {} / {} = {:.4f}'.format(
            len(keep_words), len(self.word2index), len(keep_words) / len(self.word2index)
        ))

        # Reinitialize dictionaries
        self.word2index = {}
        self.word2count = {}
        self.index2word = {PAD_token: "PAD", SOS_token: "SOS", EOS_token: "EOS"}
        self.num_words = 3 # Count default tokens

        for word in keep_words:
            self.addWord(word)

Now we can assemble our vocabulary and query/response sentence pairs. Before we are ready to use this data, we must perform some preprocessing.

First, we must convert the Unicode strings to ASCII using unicodeToAscii. Next, we should convert all letters to lowercase and trim all non-letter characters except for basic punctuation (normalizeString). Finally, to aid in training convergence, we will filter out sentences with length greater than the MAX_LENGTH threshold (filterPairs).

MAX_LENGTH = 10  # Maximum sentence length to consider

# Turn a Unicode string to plain ASCII, thanks to
# https://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn'
    )

# Lowercase, trim, and remove non-letter characters
def normalizeString(s):
    s = unicodeToAscii(s.lower().strip())
    s = re.sub(r"([.!?])", r" \1", s)
    s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
    s = re.sub(r"\s+", r" ", s).strip()
    return s

# Read query/response pairs and return a voc object
def readVocs(datafile, corpus_name):
    print("Reading lines...")
    # Read the file and split into lines
    lines = open(datafile, encoding='utf-8').\
        read().strip().split('\n')
    # Split every line into pairs and normalize
    pairs = [[normalizeString(s) for s in l.split('\t')] for l in lines]
    voc = Voc(corpus_name)
    return voc, pairs

# Returns True iff both sentences in a pair 'p' are under the MAX_LENGTH threshold
def filterPair(p):
    # Input sequences need to preserve the last word for EOS token
    return len(p[0].split(' ')) < MAX_LENGTH and len(p[1].split(' ')) < MAX_LENGTH

# Filter pairs using filterPair condition
def filterPairs(pairs):
    return [pair for pair in pairs if filterPair(pair)]

# Using the functions defined above, return a populated voc object and pairs list
def loadPrepareData(corpus, corpus_name, datafile, save_dir):
    print("Start preparing training data ...")
    voc, pairs = readVocs(datafile, corpus_name)
    print("Read {!s} sentence pairs".format(len(pairs)))
    pairs = filterPairs(pairs)
    print("Trimmed to {!s} sentence pairs".format(len(pairs)))
    print("Counting words...")
    for pair in pairs:
        voc.addSentence(pair[0])
        voc.addSentence(pair[1])
    print("Counted words:", voc.num_words)
    return voc, pairs


# Load/Assemble voc and pairs
save_dir = os.path.join("data", "save")
voc, pairs = loadPrepareData(corpus, corpus_name, datafile, save_dir)
# Print some pairs to validate
print("\npairs:")
for pair in pairs[:10]:
    print(pair)
Start preparing training data ...
Reading lines...
Read 221282 sentence pairs
Trimmed to 64271 sentence pairs
Counting words...
Counted words: 18008

pairs:
['there .', 'where ?']
['you have my word . as a gentleman', 'you re sweet .']
['hi .', 'looks like things worked out tonight huh ?']
['you know chastity ?', 'i believe we share an art instructor']
['have fun tonight ?', 'tons']
['well no . . .', 'then that s all you had to say .']
['then that s all you had to say .', 'but']
['but', 'you always been this selfish ?']
['do you listen to this crap ?', 'what crap ?']
['what good stuff ?', 'the real you .']

Another tactic that is beneficial to achieving faster convergence during training is trimming rarely used words out of our vocabulary. Decreasing the feature space will also soften the difficulty of the function that the model must learn to approximate. We will do this as a two-step process:

  1. Trim words used under a MIN_COUNT threshold using the voc.trim function.
  2. Filter out pairs with trimmed words.
MIN_COUNT = 3    # Minimum word count threshold for trimming

def trimRareWords(voc, pairs, MIN_COUNT):
    # Trim words used under the MIN_COUNT from the voc
    voc.trim(MIN_COUNT)
    # Filter out pairs with trimmed words
    keep_pairs = []
    for pair in pairs:
        input_sentence = pair[0]
        output_sentence = pair[1]
        keep_input = True
        keep_output = True
        # Check input sentence
        for word in input_sentence.split(' '):
            if word not in voc.word2index:
                keep_input = False
                break
        # Check output sentence
        for word in output_sentence.split(' '):
            if word not in voc.word2index:
                keep_output = False
                break

        # Only keep pairs that do not contain trimmed word(s) in their input or output sentence
        if keep_input and keep_output:
            keep_pairs.append(pair)

    print("Trimmed from {} pairs to {}, {:.4f} of total".format(len(pairs), len(keep_pairs), len(keep_pairs) / len(pairs)))
    return keep_pairs


# Trim voc and pairs
pairs = trimRareWords(voc, pairs, MIN_COUNT)
keep_words 7823 / 18005 = 0.4345
Trimmed from 64271 pairs to 53165, 0.8272 of total

 

Prepare Data for Models

Although we have put a great deal of effort into preparing and massaging our data into a nice vocabulary object and list of sentence pairs, our models will ultimately expect numerical torch tensors as inputs. One way to prepare the processed data for the models can be found in the seq2seq translation tutorial. In that tutorial, we use a batch size of 1, meaning that all we have to do is convert the words in our sentence pairs to their corresponding indexes from the vocabulary and feed this to the models.

However, if you're interested in speeding up training and/or would like to leverage GPU parallelization capabilities, you will need to train with mini-batches.

Using mini-batches also means that we must be mindful of the variation of sentence length in our batches. To accommodate sentences of different sizes in the same batch, we will make our batched input tensor of shape (max_length, batch_size), where sentences shorter than the max_length are zero padded after an EOS_token.

If we simply convert our English sentences to tensors by converting words to their indexes (indexesFromSentence) and zero-pad, our tensor would have shape (batch_size, max_length), and indexing the first dimension would return a full sequence across all time steps. However, we need to be able to index our batch along time, and across all sequences in the batch. Therefore, we transpose our input batch shape to (max_length, batch_size), so that indexing across the first dimension returns a time step across all sentences in the batch. We handle this transpose implicitly in the zeroPadding function.
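This implicit transpose comes from itertools.zip_longest, and can be checked in isolation with plain Python lists (the index values below are made up for illustration):

```python
import itertools

PAD_token = 0
# A batch of two index sequences of different lengths, each ending with EOS (index 2)
batch = [[5, 6, 7, 2], [8, 9, 2]]
# zip_longest(*batch) transposes (batch_size, max_length) into
# (max_length, batch_size), filling shorter sequences with PAD_token
padded = list(itertools.zip_longest(*batch, fillvalue=PAD_token))
print(padded)  # [(5, 8), (6, 9), (7, 2), (2, 0)]
```

Indexing `padded[t]` now returns time step t across all sentences in the batch, which is exactly the layout the encoder expects.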

The inputVar function handles the process of converting sentences to tensor, ultimately creating a correctly shaped zero-padded tensor. It also returns a tensor of lengths for each of the sequences in the batch, which will be passed to our decoder later.

The outputVar function performs a similar function to inputVar, but instead of returning a lengths tensor, it returns a binary mask tensor and a maximum target sentence length. The binary mask tensor has the same shape as the output target tensor, but every element that is a PAD_token is 0 and all others are 1.

batch2TrainData simply takes a bunch of pairs and returns the input and target tensors using the aforementioned functions.

def indexesFromSentence(voc, sentence):
    return [voc.word2index[word] for word in sentence.split(' ')] + [EOS_token]


def zeroPadding(l, fillvalue=PAD_token):
    return list(itertools.zip_longest(*l, fillvalue=fillvalue))

def binaryMatrix(l, value=PAD_token):
    m = []
    for i, seq in enumerate(l):
        m.append([])
        for token in seq:
            if token == PAD_token:
                m[i].append(0)
            else:
                m[i].append(1)
    return m

# Returns padded input sequence tensor and lengths
def inputVar(l, voc):
    indexes_batch = [indexesFromSentence(voc, sentence) for sentence in l]
    lengths = torch.tensor([len(indexes) for indexes in indexes_batch])
    padList = zeroPadding(indexes_batch)
    padVar = torch.LongTensor(padList)
    return padVar, lengths

# Returns padded target sequence tensor, padding mask, and max target length
def outputVar(l, voc):
    indexes_batch = [indexesFromSentence(voc, sentence) for sentence in l]
    max_target_len = max([len(indexes) for indexes in indexes_batch])
    padList = zeroPadding(indexes_batch)
    mask = binaryMatrix(padList)
    mask = torch.ByteTensor(mask)
    padVar = torch.LongTensor(padList)
    return padVar, mask, max_target_len

# Returns all items for a given batch of pairs
def batch2TrainData(voc, pair_batch):
    pair_batch.sort(key=lambda x: len(x[0].split(" ")), reverse=True)
    input_batch, output_batch = [], []
    for pair in pair_batch:
        input_batch.append(pair[0])
        output_batch.append(pair[1])
    inp, lengths = inputVar(input_batch, voc)
    output, mask, max_target_len = outputVar(output_batch, voc)
    return inp, lengths, output, mask, max_target_len


# Example for validation
small_batch_size = 5
batches = batch2TrainData(voc, [random.choice(pairs) for _ in range(small_batch_size)])
input_variable, lengths, target_variable, mask, max_target_len = batches

print("input_variable:", input_variable)
print("lengths:", lengths)
print("target_variable:", target_variable)
print("mask:", mask)
print("max_target_len:", max_target_len)
input_variable: tensor([[  25, 3048,    7,  115,  625],
        [ 359,  571,   74,   76,  367],
        [   4,   66,  742,  174,    4],
        [ 122,   25,  604,    6,    2],
        [  95,  200,  174,    2,    0],
        [   7,  177,    6,    0,    0],
        [2082,   53,    2,    0,    0],
        [3668, 1104,    0,    0,    0],
        [   6,    4,    0,    0,    0],
        [   2,    2,    0,    0,    0]])
lengths: tensor([10, 10,  7,  5,  4])
target_variable: tensor([[   7, 1264,   33, 3577,   51],
        [ 379,    4,   76,    4,  109],
        [  41,   25,  102,    2, 3065],
        [  36,  200,   29,    0,    4],
        [   4,  123, 1086,    0,    2],
        [   2,   40,    4,    0,    0],
        [   0,  158,    2,    0,    0],
        [   0,  467,    0,    0,    0],
        [   0,    4,    0,    0,    0],
        [   0,    2,    0,    0,    0]])
mask: tensor([[1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1],
        [1, 1, 1, 0, 1],
        [1, 1, 1, 0, 1],
        [1, 1, 1, 0, 0],
        [0, 1, 1, 0, 0],
        [0, 1, 0, 0, 0],
        [0, 1, 0, 0, 0],
        [0, 1, 0, 0, 0]], dtype=torch.uint8)
max_target_len: 10

 

Define Models

Seq2Seq Model

The brains of our chatbot is a sequence-to-sequence (seq2seq) model. The goal of a seq2seq model is to take a variable-length sequence as an input, and return a variable-length sequence as an output using a fixed-sized model.

Sutskever et al. discovered that by using two separate recurrent neural nets together, we can accomplish this task. One RNN acts as an encoder, which encodes a variable-length input sequence to a fixed-length context vector. In theory, this context vector will contain semantic information about the query sentence that is input to the bot. The second RNN is a decoder, which takes an input word and the context vector, and returns a guess for the next word in the sentence and a hidden state to use in the next iteration.

 

Encoder

The encoder RNN iterates through the input sentence one token (e.g. word) at a time, at each time step outputting an "output" vector and a "hidden state" vector. The hidden state vector is then passed to the next time step, while the output vector is recorded. The encoder transforms the context it saw at each point in the sequence into a set of points in a high-dimensional space, which the decoder will use to generate a meaningful output for the given task.

At the heart of our encoder is a multi-layered Gated Recurrent Unit, invented by Cho et al. in 2014. We will use a bidirectional variant of the GRU, meaning that there are essentially two independent RNNs: one that is fed the input sequence in normal sequential order, and one that is fed the input sequence in reverse order. The outputs of each network are summed at each time step. Using a bidirectional GRU gives us the advantage of encoding both past and future context.

Bidirectional RNN:

 
Note that an embedding layer is used to encode our word indexes in an arbitrarily sized feature space. For our models, this layer will map each word to a feature space of size hidden_size. When trained, these values should encode semantic similarity between similar-meaning words.

Finally, if passing a padded batch of sequences to an RNN module, we must pack and unpack padding around the RNN pass using nn.utils.rnn.pack_padded_sequence and nn.utils.rnn.pad_packed_sequence respectively.

Computation Graph:

  1. Convert word indexes to embeddings.
  2. Pack padded batch of sequences for the RNN module.
  3. Forward pass through the GRU.
  4. Unpack padding.
  5. Sum bidirectional GRU outputs.
  6. Return output and final hidden state.
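Step 5 (summing the two directions) amounts to splitting the GRU output's feature dimension in half and adding the halves elementwise. A minimal plain-Python sketch for a single time step and batch element (the numbers are made up):

```python
hidden_size = 2
# Bidirectional GRU output for one time step / batch element:
# the first hidden_size values come from the forward direction,
# the remaining hidden_size values from the backward direction
out = [1, 2, 3, 4]
# Elementwise sum of the forward and backward halves
summed = [f + b for f, b in zip(out[:hidden_size], out[hidden_size:])]
print(summed)  # [4, 6]
```

This is what `outputs[:, :, :self.hidden_size] + outputs[:, :, self.hidden_size:]` does in the EncoderRNN below, applied over the whole (max_length, batch_size, 2 * hidden_size) tensor at once.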

Inputs:

  • input_seq: batch of input sentences; shape=(max_length, batch_size)
  • input_lengths: list of sentence lengths corresponding to each sentence in the batch; shape=(batch_size)
  • hidden: hidden state; shape=(n_layers x num_directions, batch_size, hidden_size)

Outputs:

  • outputs: output features from the last hidden layer of the GRU (sum of bidirectional outputs); shape=(max_length, batch_size, hidden_size)
  • hidden: updated hidden state from the GRU; shape=(n_layers x num_directions, batch_size, hidden_size)
class EncoderRNN(nn.Module):
    def __init__(self, hidden_size, embedding, n_layers=1, dropout=0):
        super(EncoderRNN, self).__init__()
        self.n_layers = n_layers
        self.hidden_size = hidden_size
        self.embedding = embedding

        # Initialize GRU; the input_size and hidden_size params are both set to 'hidden_size'
        #   because our input size is a word embedding with number of features == hidden_size
        self.gru = nn.GRU(hidden_size, hidden_size, n_layers,
                          dropout=(0 if n_layers == 1 else dropout), bidirectional=True)

    def forward(self, input_seq, input_lengths, hidden=None):
        # Convert word indexes to embeddings
        embedded = self.embedding(input_seq)
        # Pack padded batch of sequences for RNN module
        packed = nn.utils.rnn.pack_padded_sequence(embedded, input_lengths)
        # Forward pass through GRU
        outputs, hidden = self.gru(packed, hidden)
        # Unpack padding
        outputs, _ = nn.utils.rnn.pad_packed_sequence(outputs)
        # Sum bidirectional GRU outputs
        outputs = outputs[:, :, :self.hidden_size] + outputs[:, : ,self.hidden_size:]
        # Return output and final hidden state
        return outputs, hidden

 

Decoder

The decoder RNN generates the response sentence in a token-by-token fashion. It uses the encoder's context vectors and internal hidden states to generate the next word in the sequence. It continues generating words until it outputs an EOS_token, representing the end of the sentence. A common problem with a vanilla seq2seq decoder is that if we rely solely on the context vector to encode the entire input sequence's meaning, we are likely to suffer information loss. This is especially the case when dealing with long input sequences, and it greatly limits the capability of our decoder.

To combat this, Bahdanau et al. created an "attention mechanism" that allows the decoder to pay attention to certain parts of the input sequence, rather than using the entire fixed context at every step.

At a high level, attention is calculated using the decoder's current hidden state and the encoder's outputs. The output attention weights have the same shape as the input sequence, allowing us to multiply them by the encoder outputs, giving us a weighted sum that indicates the parts of the encoder output to pay attention to. Sean Robertson's figure describes this very well:
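The weighted-sum idea can be illustrated without tensors or batching. The sketch below uses a hypothetical helper dot_attention on plain Python lists; it mirrors the dot-score variant only, with small made-up vectors:

```python
import math

def dot_attention(decoder_state, encoder_outputs):
    # Attention energies: dot product of the decoder state with each encoder output
    energies = [sum(d * e for d, e in zip(decoder_state, enc)) for enc in encoder_outputs]
    # Softmax-normalize the energies into attention weights
    exps = [math.exp(x) for x in energies]
    total = sum(exps)
    weights = [x / total for x in exps]
    # Context vector: weighted sum of the encoder outputs
    context = [sum(w * enc[i] for w, enc in zip(weights, encoder_outputs))
               for i in range(len(decoder_state))]
    return weights, context

weights, context = dot_attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
print([round(w, 3) for w in weights])  # [0.731, 0.269]
```

The first encoder output aligns with the decoder state, so it receives the larger weight and dominates the context vector.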

Luong et al. improved upon Bahdanau et al.'s groundwork by creating "global attention". The key difference is that with "global attention", we consider all of the encoder's hidden states, as opposed to Bahdanau et al.'s "local attention", which only considers the encoder's hidden state from the current time step. Another difference is that with "global attention", we calculate attention weights, or energies, using the hidden state of the decoder from the current time step only; Bahdanau et al.'s attention calculation requires knowledge of the decoder's state from the previous time step. Also, Luong et al. provide various methods to calculate the attention energies between the encoder output and decoder output, which are called "score functions".

where $h_t$ = current target decoder state and $\bar{h}_s$ = all encoder states.
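In terms of these symbols, the three score functions of Luong et al. (corresponding to the dot_score, general_score, and concat_score methods of the Attn module) are:

$$
\mathrm{score}(h_t, \bar{h}_s) =
\begin{cases}
h_t^\top \bar{h}_s & \text{(dot)} \\
h_t^\top W_a \bar{h}_s & \text{(general)} \\
v_a^\top \tanh\!\left(W_a \, [h_t ; \bar{h}_s]\right) & \text{(concat)}
\end{cases}
$$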

All things considered, the global attention mechanism can be summarized by the following figure. Note that we will implement our "Attention Layer" as a separate nn.Module called Attn. The output of this module is a softmax-normalized weights tensor of shape (batch_size, 1, max_length).

# Luong attention layer
class Attn(nn.Module):
    def __init__(self, method, hidden_size):
        super(Attn, self).__init__()
        self.method = method
        if self.method not in ['dot', 'general', 'concat']:
            raise ValueError(self.method, "is not an appropriate attention method.")
        self.hidden_size = hidden_size
        if self.method == 'general':
            self.attn = nn.Linear(self.hidden_size, hidden_size)
        elif self.method == 'concat':
            self.attn = nn.Linear(self.hidden_size * 2, hidden_size)
            self.v = nn.Parameter(torch.FloatTensor(hidden_size))

    def dot_score(self, hidden, encoder_output):
        return torch.sum(hidden * encoder_output, dim=2)

    def general_score(self, hidden, encoder_output):
        energy = self.attn(encoder_output)
        return torch.sum(hidden * energy, dim=2)

    def concat_score(self, hidden, encoder_output):
        energy = self.attn(torch.cat((hidden.expand(encoder_output.size(0), -1, -1), encoder_output), 2)).tanh()
        return torch.sum(self.v * energy, dim=2)

    def forward(self, hidden, encoder_outputs):
        # Calculate the attention weights (energies) based on the given method
        if self.method == 'general':
            attn_energies = self.general_score(hidden, encoder_outputs)
        elif self.method == 'concat':
            attn_energies = self.concat_score(hidden, encoder_outputs)
        elif self.method == 'dot':
            attn_energies = self.dot_score(hidden, encoder_outputs)

        # Transpose max_length and batch_size dimensions
        attn_energies = attn_energies.t()

        # Return the softmax normalized probability scores (with added dimension)
        return F.softmax(attn_energies, dim=1).unsqueeze(1)

Now that we have defined our attention submodule, we can implement the actual decoder model. For the decoder, we will manually feed our batch one time step at a time. This means that our embedded word tensor and GRU output will both have shape (1, batch_size, hidden_size).

Computation Graph:

  1. Get embedding of current input word.
  2. Forward through unidirectional GRU.
  3. Calculate attention weights from the current GRU output from (2).
  4. Multiply attention weights by encoder outputs to get a new "weighted sum" context vector.
  5. Concatenate weighted context vector and GRU output using Luong eq. 5.
  6. Predict next word using Luong eq. 6 (without softmax).
  7. Return output and final hidden state.

Inputs:

  • input_step: one time step (one word) of the input sentence batch; shape=(1, batch_size)
  • last_hidden: final hidden layer of the GRU; shape=(n_layers x num_directions, batch_size, hidden_size)
  • encoder_outputs: encoder model's output; shape=(max_length, batch_size, hidden_size)

Outputs:

  • output: softmax-normalized tensor giving probabilities of each word being the correct next word in the decoded sequence; shape=(batch_size, voc.num_words)
  • hidden: final hidden state of the GRU; shape=(n_layers x num_directions, batch_size, hidden_size)
class LuongAttnDecoderRNN(nn.Module):
    def __init__(self, attn_model, embedding, hidden_size, output_size, n_layers=1, dropout=0.1):
        super(LuongAttnDecoderRNN, self).__init__()

        # Keep for reference
        self.attn_model = attn_model
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.n_layers = n_layers
        self.dropout = dropout

        # Define layers
        self.embedding = embedding
        self.embedding_dropout = nn.Dropout(dropout)
        self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=(0 if n_layers == 1 else dropout))
        self.concat = nn.Linear(hidden_size * 2, hidden_size)
        self.out = nn.Linear(hidden_size, output_size)

        self.attn = Attn(attn_model, hidden_size)

    def forward(self, input_step, last_hidden, encoder_outputs):
        # Note: we run this one step (word) at a time
        # Get embedding of current input word
        embedded = self.embedding(input_step)
        embedded = self.embedding_dropout(embedded)
        # Forward through unidirectional GRU
        rnn_output, hidden = self.gru(embedded, last_hidden)
        # Calculate attention weights from the current GRU output
        attn_weights = self.attn(rnn_output, encoder_outputs)
        # Multiply attention weights to encoder outputs to get new "weighted sum" context vector
        context = attn_weights.bmm(encoder_outputs.transpose(0, 1))
        # Concatenate weighted context vector and GRU output using Luong eq. 5
        rnn_output = rnn_output.squeeze(0)
        context = context.squeeze(1)
        concat_input = torch.cat((rnn_output, context), 1)
        concat_output = torch.tanh(self.concat(concat_input))
        # Predict next word using Luong eq. 6
        output = self.out(concat_output)
        output = F.softmax(output, dim=1)
        # Return output and final hidden state
        return output, hidden

 

Define Training Procedure

Masked loss

Since we are dealing with batches of padded sequences, we cannot simply consider all elements of the tensor when calculating loss. We define maskNLLLoss to calculate our loss based on our decoder's output tensor, the target tensor, and a binary mask tensor describing the padding of the target tensor. This loss function calculates the average negative log-likelihood of the elements that correspond to a 1 in the mask tensor.
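The underlying math can be illustrated in plain Python; this is a sketch of the averaged masked negative log-likelihood only, not the tensorized maskNLLLoss below, and the probabilities are made up:

```python
import math

def masked_nll(probs, targets, mask):
    # Per-step negative log-likelihood of the target word,
    # kept only where mask == 1 (i.e. non-PAD positions)
    losses = [-math.log(p[t]) for p, t, m in zip(probs, targets, mask) if m]
    # Average over the unmasked positions
    return sum(losses) / len(losses)

# Two time steps for one sequence; the second step is padding (mask = 0)
probs = [[0.5, 0.5], [0.25, 0.75]]
loss = masked_nll(probs, targets=[0, 1], mask=[1, 0])
print(round(loss, 4))  # -log(0.5) = 0.6931
```

Only the first step contributes, so the loss is exactly -log(0.5); the padded step is ignored entirely.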

def maskNLLLoss(inp, target, mask):
    nTotal = mask.sum()
    crossEntropy = -torch.log(torch.gather(inp, 1, target.view(-1, 1)).squeeze(1))
    loss = crossEntropy.masked_select(mask).mean()
    loss = loss.to(device)
    return loss, nTotal.item()

 

Single training iteration

The train function contains the algorithm for a single training iteration (a single batch of inputs).

We will use a couple of clever tricks to aid in convergence:

  • The first trick is using teacher forcing. This means that at some probability, set by teacher_forcing_ratio, we use the current target word as the decoder's next input rather than using the decoder's current guess. This technique acts as training wheels for the decoder, aiding in more efficient training. However, teacher forcing can lead to model instability during inference, as the decoder may not have a sufficient chance to truly craft its own output sequences during training. Thus, we must be mindful of how we are setting the teacher_forcing_ratio, and not be fooled by fast convergence.
  • The second trick that we implement is gradient clipping. This is a commonly used technique for countering the "exploding gradient" problem. In essence, by clipping or thresholding gradients to a maximum value, we prevent the gradients from growing exponentially and either overflowing (NaN) or overshooting steep cliffs in the cost function.
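The norm-clipping idea behind nn.utils.clip_grad_norm_ can be sketched in plain Python; this is a simplification over a flat list of gradient values, not the PyTorch implementation:

```python
import math

def clip_by_global_norm(grads, max_norm):
    # Global L2 norm of all gradient values
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        # Rescale every gradient so the global norm equals max_norm;
        # directions are preserved, only the magnitude shrinks
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads

print(clip_by_global_norm([3.0, 4.0], 2.5))  # norm 5.0 -> [1.5, 2.0]
```

Gradients whose global norm is already below the threshold pass through unchanged, so well-behaved updates are unaffected.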

Image source: Goodfellow et al. Deep Learning. 2016. https://www.deeplearningbook.org/

Sequence of Operations:

  1. Forward pass the entire input batch through the encoder.
  2. Initialize the decoder input as SOS_token, and the hidden state as the encoder's final hidden state.
  3. Forward the input batch sequence through the decoder one time step at a time.
  4. If teacher forcing: set the next decoder input as the current target; else: set the next decoder input as the current decoder output.
  5. Calculate and accumulate the loss.
  6. Perform backpropagation.
  7. Clip gradients.
  8. Update encoder and decoder model parameters.

Note: PyTorch's RNN modules (RNN, LSTM, GRU) can be used like any other non-recurrent layers by simply passing them the entire input sequence (or batch of sequences). We use the GRU layer like this in the encoder. The reality is that under the hood, there is an iterative process looping over each time step to calculate hidden states. Alternatively, you can run these modules one time step at a time. In that case, we manually loop over the sequences during the training process, as we must do for the decoder model. As long as you maintain the correct conceptual model of these modules, implementing sequential models can be very straightforward.

def train(input_variable, lengths, target_variable, mask, max_target_len, encoder, decoder, embedding,
          encoder_optimizer, decoder_optimizer, batch_size, clip, max_length=MAX_LENGTH):

    # Zero gradients
    encoder_optimizer.zero_grad()
    decoder_optimizer.zero_grad()

    # Set device options
    input_variable = input_variable.to(device)
    lengths = lengths.to(device)
    target_variable = target_variable.to(device)
    mask = mask.to(device)

    # Initialize variables
    loss = 0
    print_losses = []
    n_totals = 0

    # Forward pass through encoder
    encoder_outputs, encoder_hidden = encoder(input_variable, lengths)

    # Create initial decoder input (start with SOS tokens for each sentence)
    decoder_input = torch.LongTensor([[SOS_token for _ in range(batch_size)]])
    decoder_input = decoder_input.to(device)

    # Set initial decoder hidden state to the encoder's final hidden state
    decoder_hidden = encoder_hidden[:decoder.n_layers]

    # Determine if we are using teacher forcing this iteration
    use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False

    # Forward batch of sequences through decoder one time step at a time
    if use_teacher_forcing:
        for t in range(max_target_len):
            decoder_output, decoder_hidden = decoder(
                decoder_input, decoder_hidden, encoder_outputs
            )
            # Teacher forcing: next input is current target
            decoder_input = target_variable[t].view(1, -1)
            # Calculate and accumulate loss
            mask_loss, nTotal = maskNLLLoss(decoder_output, target_variable[t], mask[t])
            loss += mask_loss
            print_losses.append(mask_loss.item() * nTotal)
            n_totals += nTotal
    else:
        for t in range(max_target_len):
            decoder_output, decoder_hidden = decoder(
                decoder_input, decoder_hidden, encoder_outputs
            )
            # No teacher forcing: next input is decoder's own current output
            _, topi = decoder_output.topk(1)
            decoder_input = torch.LongTensor([[topi[i][0] for i in range(batch_size)]])
            decoder_input = decoder_input.to(device)
            # Calculate and accumulate loss
            mask_loss, nTotal = maskNLLLoss(decoder_output, target_variable[t], mask[t])
            loss += mask_loss
            print_losses.append(mask_loss.item() * nTotal)
            n_totals += nTotal

    # Perform backpropagation
    loss.backward()

    # Clip gradients: gradients are modified in place
    _ = nn.utils.clip_grad_norm_(encoder.parameters(), clip)
    _ = nn.utils.clip_grad_norm_(decoder.parameters(), clip)

    # Adjust model weights
    encoder_optimizer.step()
    decoder_optimizer.step()

    return sum(print_losses) / n_totals
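
As an aside, the maskNLLLoss used above averages the negative log-likelihood of the target word only over non-padded positions of the batch. A minimal plain-Python sketch of that idea (illustrative only; the real function operates on tensors and a boolean mask):

```python
import math

def mask_nll_loss(probs, targets, mask):
    # Average negative log-likelihood of the target word at one time step,
    # counting only non-padded (mask == 1) batch positions.
    losses = [-math.log(p[t]) for p, t, m in zip(probs, targets, mask) if m]
    n_total = sum(mask)
    return sum(losses) / n_total, n_total

# Batch of 3 sentences at one time step; the third position is padding.
probs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]]
targets = [0, 1, 2]
mask = [1, 1, 0]
loss, n = mask_nll_loss(probs, targets, mask)
```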

 

Training iterations

Finally, it is time to tie the full training procedure together with the data. The trainIters function is responsible for running n_iterations of training given the passed-in models, optimizers, data, etc. This function is quite self-explanatory, as the train function does the heavy lifting. One thing to note is that when we save our model, we save a tarball containing the encoder and decoder state_dicts (parameters), the optimizers' state_dicts, the loss, the iteration, etc. Saving the model in this way gives us the ultimate flexibility with the checkpoint. After loading a checkpoint, we will be able to use the model parameters to run inference, or we can continue training right where we left off.

def trainIters(model_name, voc, pairs, encoder, decoder, encoder_optimizer, decoder_optimizer, embedding, encoder_n_layers, decoder_n_layers, save_dir, n_iteration, batch_size, print_every, save_every, clip, corpus_name, loadFilename):

    # Load batches for each iteration
    training_batches = [batch2TrainData(voc, [random.choice(pairs) for _ in range(batch_size)])
                      for _ in range(n_iteration)]

    # Initializations
    print('Initializing ...')
    start_iteration = 1
    print_loss = 0
    if loadFilename:
        start_iteration = checkpoint['iteration'] + 1

    # Training loop
    print("Training...")
    for iteration in range(start_iteration, n_iteration + 1):
        training_batch = training_batches[iteration - 1]
        # Extract fields from batch
        input_variable, lengths, target_variable, mask, max_target_len = training_batch

        # Run a training iteration with batch
        loss = train(input_variable, lengths, target_variable, mask, max_target_len, encoder,
                     decoder, embedding, encoder_optimizer, decoder_optimizer, batch_size, clip)
        print_loss += loss

        # Print progress
        if iteration % print_every == 0:
            print_loss_avg = print_loss / print_every
            print("Iteration: {}; Percent complete: {:.1f}%; Average loss: {:.4f}".format(iteration, iteration / n_iteration * 100, print_loss_avg))
            print_loss = 0

        # Save checkpoint
        if (iteration % save_every == 0):
            directory = os.path.join(save_dir, model_name, corpus_name, '{}-{}_{}'.format(encoder_n_layers, decoder_n_layers, hidden_size))
            if not os.path.exists(directory):
                os.makedirs(directory)
            torch.save({
                'iteration': iteration,
                'en': encoder.state_dict(),
                'de': decoder.state_dict(),
                'en_opt': encoder_optimizer.state_dict(),
                'de_opt': decoder_optimizer.state_dict(),
                'loss': loss,
                'voc_dict': voc.__dict__,
                'embedding': embedding.state_dict()
            }, os.path.join(directory, '{}_{}.tar'.format(iteration, 'checkpoint')))

 

Define evaluation

After training a model, we want to be able to talk to the bot ourselves. First, we must define how we want the model to decode the encoded input.

 

Greedy decoding

Greedy decoding is the decoding method that we use during evaluation when we are not using teacher forcing. In other words, at each time step we simply choose the word from decoder_output with the highest softmax value. This decoding method is optimal on a single time-step level.
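
For intuition, greedy selection at a single time step amounts to taking the argmax of the softmax over the vocabulary. A small plain-Python sketch (the vocabulary size and raw scores here are made up):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def greedy_pick(logits):
    # Greedy decoding at one time step: take the single highest-probability word.
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return best, probs[best]

# Hypothetical decoder scores over a 4-word vocabulary.
token, score = greedy_pick([1.0, 3.0, 0.5, 2.0])
```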

To facilitate the greedy decoding operation, we define a GreedySearchDecoder class. When run, an object of this class takes an input sequence (input_seq) of shape (input_seq length, 1), a scalar input-length (input_length) tensor, and a max_length to bound the response sentence length. The input sentence is evaluated using the following computational graph.

Computational graph:

  1. Forward the input through the encoder model.
  2. Prepare the encoder's final hidden layer to be the first hidden input to the decoder.
  3. Initialize the decoder's first input as SOS_token.
  4. Initialize the tensors to append decoded words to.
  5. Iteratively decode one word token at a time:
    1. Forward pass through the decoder.
    2. Obtain the most likely word token and its softmax score.
    3. Record the token and score.
    4. Prepare the current token to be the next decoder input.
  6. Return the collections of word tokens and scores.

class GreedySearchDecoder(nn.Module):
    def __init__(self, encoder, decoder):
        super(GreedySearchDecoder, self).__init__()
        self.encoder = encoder
        self.decoder = decoder

    def forward(self, input_seq, input_length, max_length):
        # Forward input through encoder model
        encoder_outputs, encoder_hidden = self.encoder(input_seq, input_length)
        # Prepare encoder's final hidden layer to be first hidden input to the decoder
        decoder_hidden = encoder_hidden[:self.decoder.n_layers]
        # Initialize decoder input with SOS_token
        decoder_input = torch.ones(1, 1, device=device, dtype=torch.long) * SOS_token
        # Initialize tensors to append decoded words to
        all_tokens = torch.zeros([0], device=device, dtype=torch.long)
        all_scores = torch.zeros([0], device=device)
        # Iteratively decode one word token at a time
        for _ in range(max_length):
            # Forward pass through decoder
            decoder_output, decoder_hidden = self.decoder(decoder_input, decoder_hidden, encoder_outputs)
            # Obtain most likely word token and its softmax score
            decoder_scores, decoder_input = torch.max(decoder_output, dim=1)
            # Record token and score
            all_tokens = torch.cat((all_tokens, decoder_input), dim=0)
            all_scores = torch.cat((all_scores, decoder_scores), dim=0)
            # Prepare current token to be next decoder input (add a dimension)
            decoder_input = torch.unsqueeze(decoder_input, 0)
        # Return collections of word tokens and scores
        return all_tokens, all_scores

 

Evaluate my text

Now that we have our decoding method defined, we can write functions for evaluating a string input sentence. The evaluate function manages the low-level process of handling the input sentence. We first format the sentence as an input batch of word indexes with batch_size==1. We do this by converting the words of the sentence to their corresponding indexes, and transposing the dimensions to prepare the tensor for our models. We also create a lengths tensor which contains the length of our input sentence. In this case, lengths is scalar because we are only evaluating one sentence at a time (batch_size==1). Next, we obtain the decoded response sentence tensor using our GreedySearchDecoder object (searcher). Finally, we convert the response's indexes to words and return the list of decoded words.

evaluateInput acts as the user interface to our chatbot. When called, an input text field will spawn in which we can enter our query sentence. After typing our input sentence and pressing Enter, our text is normalized in the same way as our training data, and is ultimately fed to the evaluate function to obtain a decoded output sentence. We loop this process, so we can keep chatting with our bot until we enter either "q" or "quit".

Finally, if a sentence is entered that contains a word that is not in the vocabulary, we handle this gracefully by printing an error message and prompting the user to enter another sentence.

def evaluate(encoder, decoder, searcher, voc, sentence, max_length=MAX_LENGTH):
    ### Format input sentence as a batch
    # words -> indexes
    indexes_batch = [indexesFromSentence(voc, sentence)]
    # Create lengths tensor
    lengths = torch.tensor([len(indexes) for indexes in indexes_batch])
    # Transpose dimensions of batch to match models' expectations
    input_batch = torch.LongTensor(indexes_batch).transpose(0, 1)
    # Use appropriate device
    input_batch = input_batch.to(device)
    lengths = lengths.to(device)
    # Decode sentence with searcher
    tokens, scores = searcher(input_batch, lengths, max_length)
    # indexes -> words
    decoded_words = [voc.index2word[token.item()] for token in tokens]
    return decoded_words


def evaluateInput(encoder, decoder, searcher, voc):
    input_sentence = ''
    while(1):
        try:
            # Get input sentence
            input_sentence = input('> ')
            # Check if it is quit case
            if input_sentence == 'q' or input_sentence == 'quit': break
            # Normalize sentence
            input_sentence = normalizeString(input_sentence)
            # Evaluate sentence
            output_words = evaluate(encoder, decoder, searcher, voc, input_sentence)
            # Format and print response sentence
            output_words[:] = [x for x in output_words if not (x == 'EOS' or x == 'PAD')]
            print('Bot:', ' '.join(output_words))

        except KeyError:
            print("Error: Encountered unknown word.")

 

Run the model

Finally, it is time to run our model!

Regardless of whether we want to train or test the chatbot model, we must initialize the individual encoder and decoder models. In the following block, we set our desired configuration, choose to start from scratch or set a checkpoint to load from, and build and initialize the models. Feel free to play with different model configurations to optimize performance.

# Configure models
model_name = 'cb_model'
attn_model = 'dot'
#attn_model = 'general'
#attn_model = 'concat'
hidden_size = 500
encoder_n_layers = 2
decoder_n_layers = 2
dropout = 0.1
batch_size = 64

# Set checkpoint to load from; set to None if starting from scratch
loadFilename = None
checkpoint_iter = 4000
#loadFilename = os.path.join(save_dir, model_name, corpus_name,
#                            '{}-{}_{}'.format(encoder_n_layers, decoder_n_layers, hidden_size),
#                            '{}_checkpoint.tar'.format(checkpoint_iter))


# Load model if a loadFilename is provided
if loadFilename:
    # If loading on same machine the model was trained on
    checkpoint = torch.load(loadFilename)
    # If loading a model trained on GPU to CPU
    #checkpoint = torch.load(loadFilename, map_location=torch.device('cpu'))
    encoder_sd = checkpoint['en']
    decoder_sd = checkpoint['de']
    encoder_optimizer_sd = checkpoint['en_opt']
    decoder_optimizer_sd = checkpoint['de_opt']
    embedding_sd = checkpoint['embedding']
    voc.__dict__ = checkpoint['voc_dict']


print('Building encoder and decoder ...')
# Initialize word embeddings
embedding = nn.Embedding(voc.num_words, hidden_size)
if loadFilename:
    embedding.load_state_dict(embedding_sd)
# Initialize encoder & decoder models
encoder = EncoderRNN(hidden_size, embedding, encoder_n_layers, dropout)
decoder = LuongAttnDecoderRNN(attn_model, embedding, hidden_size, voc.num_words, decoder_n_layers, dropout)
if loadFilename:
    encoder.load_state_dict(encoder_sd)
    decoder.load_state_dict(decoder_sd)
# Use appropriate device
encoder = encoder.to(device)
decoder = decoder.to(device)
print('Models built and ready to go!')
Building encoder and decoder ...
Models built and ready to go!

 

Run training

Run the following block if you want to train the model.

First we set our training parameters, then we initialize our optimizers, and finally we call the trainIters function to run our training iterations.

# Configure training/optimization
clip = 50.0
teacher_forcing_ratio = 1.0
learning_rate = 0.0001
decoder_learning_ratio = 5.0
n_iteration = 4000
print_every = 1
save_every = 500

# Ensure dropout layers are in train mode
encoder.train()
decoder.train()

# Initialize optimizers
print('Building optimizers ...')
encoder_optimizer = optim.Adam(encoder.parameters(), lr=learning_rate)
decoder_optimizer = optim.Adam(decoder.parameters(), lr=learning_rate * decoder_learning_ratio)
if loadFilename:
    encoder_optimizer.load_state_dict(encoder_optimizer_sd)
    decoder_optimizer.load_state_dict(decoder_optimizer_sd)

# Run training iterations
print("Starting Training!")
trainIters(model_name, voc, pairs, encoder, decoder, encoder_optimizer, decoder_optimizer,
           embedding, encoder_n_layers, decoder_n_layers, save_dir, n_iteration, batch_size,
           print_every, save_every, clip, corpus_name, loadFilename)
Building optimizers ...
Starting Training!
Initializing ...
Training...
Iteration: 1; Percent complete: 0.0%; Average loss: 8.9700
Iteration: 2; Percent complete: 0.1%; Average loss: 8.8510
Iteration: 3; Percent complete: 0.1%; Average loss: 8.6125
Iteration: 4; Percent complete: 0.1%; Average loss: 8.3319
Iteration: 5; Percent complete: 0.1%; Average loss: 7.8880
Iteration: 6; Percent complete: 0.1%; Average loss: 7.3496
Iteration: 7; Percent complete: 0.2%; Average loss: 6.8200
Iteration: 8; Percent complete: 0.2%; Average loss: 6.8438
Iteration: 9; Percent complete: 0.2%; Average loss: 6.6951
Iteration: 10; Percent complete: 0.2%; Average loss: 6.4766
Iteration: 11; Percent complete: 0.3%; Average loss: 6.2165
Iteration: 12; Percent complete: 0.3%; Average loss: 5.7065
Iteration: 13; Percent complete: 0.3%; Average loss: 5.5910
Iteration: 14; Percent complete: 0.4%; Average loss: 5.7643
Iteration: 15; Percent complete: 0.4%; Average loss: 5.4197
Iteration: 16; Percent complete: 0.4%; Average loss: 5.1872
Iteration: 17; Percent complete: 0.4%; Average loss: 5.1850
Iteration: 18; Percent complete: 0.4%; Average loss: 5.0741
Iteration: 19; Percent complete: 0.5%; Average loss: 4.9552
Iteration: 20; Percent complete: 0.5%; Average loss: 4.9188
Iteration: 21; Percent complete: 0.5%; Average loss: 4.8449
Iteration: 22; Percent complete: 0.5%; Average loss: 5.0830
Iteration: 23; Percent complete: 0.6%; Average loss: 4.6836
Iteration: 24; Percent complete: 0.6%; Average loss: 5.0436
Iteration: 25; Percent complete: 0.6%; Average loss: 4.8137
Iteration: 26; Percent complete: 0.7%; Average loss: 4.8722
Iteration: 27; Percent complete: 0.7%; Average loss: 4.5043
Iteration: 28; Percent complete: 0.7%; Average loss: 4.8190
Iteration: 29; Percent complete: 0.7%; Average loss: 4.9813
Iteration: 30; Percent complete: 0.8%; Average loss: 4.8759
Iteration: 31; Percent complete: 0.8%; Average loss: 4.6561
Iteration: 32; Percent complete: 0.8%; Average loss: 4.6601
Iteration: 33; Percent complete: 0.8%; Average loss: 4.8248
Iteration: 34; Percent complete: 0.9%; Average loss: 4.8463
Iteration: 35; Percent complete: 0.9%; Average loss: 4.6433
Iteration: 36; Percent complete: 0.9%; Average loss: 4.8477
Iteration: 37; Percent complete: 0.9%; Average loss: 4.7450
Iteration: 38; Percent complete: 0.9%; Average loss: 4.8099
Iteration: 39; Percent complete: 1.0%; Average loss: 4.8009
Iteration: 40; Percent complete: 1.0%; Average loss: 4.8867
Iteration: 41; Percent complete: 1.0%; Average loss: 4.5996
Iteration: 42; Percent complete: 1.1%; Average loss: 4.6946
Iteration: 43; Percent complete: 1.1%; Average loss: 4.6344
Iteration: 44; Percent complete: 1.1%; Average loss: 4.5463
Iteration: 45; Percent complete: 1.1%; Average loss: 4.5047
Iteration: 46; Percent complete: 1.1%; Average loss: 4.6456
Iteration: 47; Percent complete: 1.2%; Average loss: 4.8639
Iteration: 48; Percent complete: 1.2%; Average loss: 4.5855
Iteration: 49; Percent complete: 1.2%; Average loss: 4.9177
Iteration: 50; Percent complete: 1.2%; Average loss: 4.7859
Iteration: 51; Percent complete: 1.3%; Average loss: 4.5807
Iteration: 52; Percent complete: 1.3%; Average loss: 4.6704
Iteration: 53; Percent complete: 1.3%; Average loss: 4.5819
Iteration: 54; Percent complete: 1.4%; Average loss: 4.6947
Iteration: 55; Percent complete: 1.4%; Average loss: 4.5120
Iteration: 56; Percent complete: 1.4%; Average loss: 4.5087
Iteration: 57; Percent complete: 1.4%; Average loss: 4.3663
Iteration: 58; Percent complete: 1.5%; Average loss: 4.6250
Iteration: 59; Percent complete: 1.5%; Average loss: 4.3907
Iteration: 60; Percent complete: 1.5%; Average loss: 4.5351
Iteration: 61; Percent complete: 1.5%; Average loss: 4.7792
Iteration: 62; Percent complete: 1.6%; Average loss: 4.5312
Iteration: 63; Percent complete: 1.6%; Average loss: 4.4148
Iteration: 64; Percent complete: 1.6%; Average loss: 4.6137
Iteration: 65; Percent complete: 1.6%; Average loss: 4.5955
Iteration: 66; Percent complete: 1.7%; Average loss: 4.5643
Iteration: 67; Percent complete: 1.7%; Average loss: 4.5415
Iteration: 68; Percent complete: 1.7%; Average loss: 4.2983
Iteration: 69; Percent complete: 1.7%; Average loss: 4.5145
Iteration: 70; Percent complete: 1.8%; Average loss: 4.3834
Iteration: 71; Percent complete: 1.8%; Average loss: 4.3725
Iteration: 72; Percent complete: 1.8%; Average loss: 4.5775
Iteration: 73; Percent complete: 1.8%; Average loss: 4.4024
Iteration: 74; Percent complete: 1.8%; Average loss: 4.7020
Iteration: 75; Percent complete: 1.9%; Average loss: 4.9874
Iteration: 76; Percent complete: 1.9%; Average loss: 4.3946
Iteration: 77; Percent complete: 1.9%; Average loss: 4.6714
Iteration: 78; Percent complete: 1.9%; Average loss: 4.6943
Iteration: 79; Percent complete: 2.0%; Average loss: 4.4128
Iteration: 80; Percent complete: 2.0%; Average loss: 4.3681
Iteration: 81; Percent complete: 2.0%; Average loss: 4.2517
Iteration: 82; Percent complete: 2.1%; Average loss: 4.4193
Iteration: 83; Percent complete: 2.1%; Average loss: 4.6773
Iteration: 84; Percent complete: 2.1%; Average loss: 4.5019
Iteration: 85; Percent complete: 2.1%; Average loss: 4.7879
Iteration: 86; Percent complete: 2.1%; Average loss: 4.3886
Iteration: 87; Percent complete: 2.2%; Average loss: 4.5343
Iteration: 88; Percent complete: 2.2%; Average loss: 4.3677
Iteration: 89; Percent complete: 2.2%; Average loss: 4.2723
Iteration: 90; Percent complete: 2.2%; Average loss: 4.2885
Iteration: 91; Percent complete: 2.3%; Average loss: 4.2639
Iteration: 92; Percent complete: 2.3%; Average loss: 4.5278
Iteration: 93; Percent complete: 2.3%; Average loss: 4.7404
Iteration: 94; Percent complete: 2.4%; Average loss: 4.2921
Iteration: 95; Percent complete: 2.4%; Average loss: 4.3591
Iteration: 96; Percent complete: 2.4%; Average loss: 4.4808
Iteration: 97; Percent complete: 2.4%; Average loss: 4.5840
Iteration: 98; Percent complete: 2.5%; Average loss: 4.2998
Iteration: 99; Percent complete: 2.5%; Average loss: 4.6606
Iteration: 100; Percent complete: 2.5%; Average loss: 4.5687
Iteration: 101; Percent complete: 2.5%; Average loss: 4.2995
Iteration: 102; Percent complete: 2.5%; Average loss: 4.5056
Iteration: 103; Percent complete: 2.6%; Average loss: 4.5341
Iteration: 104; Percent complete: 2.6%; Average loss: 4.5375
Iteration: 105; Percent complete: 2.6%; Average loss: 4.3892
Iteration: 106; Percent complete: 2.6%; Average loss: 4.5028
Iteration: 107; Percent complete: 2.7%; Average loss: 4.1008
Iteration: 108; Percent complete: 2.7%; Average loss: 4.4503
Iteration: 109; Percent complete: 2.7%; Average loss: 4.3562
Iteration: 110; Percent complete: 2.8%; Average loss: 4.4579
Iteration: 111; Percent complete: 2.8%; Average loss: 4.5974
Iteration: 112; Percent complete: 2.8%; Average loss: 4.4283
Iteration: 113; Percent complete: 2.8%; Average loss: 4.2432
Iteration: 114; Percent complete: 2.9%; Average loss: 4.3201
Iteration: 115; Percent complete: 2.9%; Average loss: 4.4858
Iteration: 116; Percent complete: 2.9%; Average loss: 4.2364
Iteration: 117; Percent complete: 2.9%; Average loss: 4.4088
Iteration: 118; Percent complete: 2.9%; Average loss: 4.4584
Iteration: 119; Percent complete: 3.0%; Average loss: 4.2036
Iteration: 120; Percent complete: 3.0%; Average loss: 4.1645
Iteration: 121; Percent complete: 3.0%; Average loss: 4.1304
Iteration: 122; Percent complete: 3.0%; Average loss: 4.2270
Iteration: 123; Percent complete: 3.1%; Average loss: 4.2839
Iteration: 124; Percent complete: 3.1%; Average loss: 4.4085
Iteration: 125; Percent complete: 3.1%; Average loss: 4.1175
Iteration: 126; Percent complete: 3.1%; Average loss: 4.2625
Iteration: 127; Percent complete: 3.2%; Average loss: 4.3785
Iteration: 128; Percent complete: 3.2%; Average loss: 4.3612
Iteration: 129; Percent complete: 3.2%; Average loss: 4.3149
Iteration: 130; Percent complete: 3.2%; Average loss: 4.2489
Iteration: 131; Percent complete: 3.3%; Average loss: 4.2136
Iteration: 132; Percent complete: 3.3%; Average loss: 4.3852
Iteration: 133; Percent complete: 3.3%; Average loss: 4.3666
Iteration: 134; Percent complete: 3.4%; Average loss: 4.3188
Iteration: 135; Percent complete: 3.4%; Average loss: 4.1183
Iteration: 136; Percent complete: 3.4%; Average loss: 4.6119
Iteration: 137; Percent complete: 3.4%; Average loss: 4.3853
Iteration: 138; Percent complete: 3.5%; Average loss: 4.1454
Iteration: 139; Percent complete: 3.5%; Average loss: 3.9775
Iteration: 140; Percent complete: 3.5%; Average loss: 4.2327
Iteration: 141; Percent complete: 3.5%; Average loss: 4.2128
Iteration: 142; Percent complete: 3.5%; Average loss: 4.1233
Iteration: 143; Percent complete: 3.6%; Average loss: 4.2684
Iteration: 144; Percent complete: 3.6%; Average loss: 4.2873
Iteration: 145; Percent complete: 3.6%; Average loss: 4.2775
Iteration: 146; Percent complete: 3.6%; Average loss: 4.3609
Iteration: 147; Percent complete: 3.7%; Average loss: 4.2381
Iteration: 148; Percent complete: 3.7%; Average loss: 4.3620
Iteration: 149; Percent complete: 3.7%; Average loss: 4.2392
Iteration: 150; Percent complete: 3.8%; Average loss: 4.3360
Iteration: 151; Percent complete: 3.8%; Average loss: 4.2176
Iteration: 152; Percent complete: 3.8%; Average loss: 4.2107
Iteration: 153; Percent complete: 3.8%; Average loss: 4.1726
Iteration: 154; Percent complete: 3.9%; Average loss: 4.2341
Iteration: 155; Percent complete: 3.9%; Average loss: 4.3721
Iteration: 156; Percent complete: 3.9%; Average loss: 4.3260
Iteration: 157; Percent complete: 3.9%; Average loss: 4.1872
Iteration: 158; Percent complete: 4.0%; Average loss: 4.1366
Iteration: 159; Percent complete: 4.0%; Average loss: 4.4480
Iteration: 160; Percent complete: 4.0%; Average loss: 4.3218
Iteration: 161; Percent complete: 4.0%; Average loss: 4.2441
Iteration: 162; Percent complete: 4.0%; Average loss: 4.4961
Iteration: 163; Percent complete: 4.1%; Average loss: 4.0517
Iteration: 164; Percent complete: 4.1%; Average loss: 4.3145
Iteration: 165; Percent complete: 4.1%; Average loss: 4.1128
Iteration: 166; Percent complete: 4.2%; Average loss: 4.1953
Iteration: 167; Percent complete: 4.2%; Average loss: 4.2139
Iteration: 168; Percent complete: 4.2%; Average loss: 4.3889
Iteration: 169; Percent complete: 4.2%; Average loss: 4.2688
Iteration: 170; Percent complete: 4.2%; Average loss: 4.3338
Iteration: 171; Percent complete: 4.3%; Average loss: 4.0004
Iteration: 172; Percent complete: 4.3%; Average loss: 4.1113
Iteration: 173; Percent complete: 4.3%; Average loss: 4.2388
Iteration: 174; Percent complete: 4.3%; Average loss: 3.9562
Iteration: 175; Percent complete: 4.4%; Average loss: 4.2686
Iteration: 176; Percent complete: 4.4%; Average loss: 4.3226
Iteration: 177; Percent complete: 4.4%; Average loss: 4.2844
Iteration: 178; Percent complete: 4.5%; Average loss: 4.1881
Iteration: 179; Percent complete: 4.5%; Average loss: 4.1640
Iteration: 180; Percent complete: 4.5%; Average loss: 4.1756
Iteration: 181; Percent complete: 4.5%; Average loss: 4.0146
Iteration: 182; Percent complete: 4.5%; Average loss: 4.1844
Iteration: 183; Percent complete: 4.6%; Average loss: 4.2895
Iteration: 184; Percent complete: 4.6%; Average loss: 3.9900
Iteration: 185; Percent complete: 4.6%; Average loss: 3.9624
Iteration: 186; Percent complete: 4.7%; Average loss: 4.2113
Iteration: 187; Percent complete: 4.7%; Average loss: 4.2456
Iteration: 188; Percent complete: 4.7%; Average loss: 4.4317
Iteration: 189; Percent complete: 4.7%; Average loss: 4.0182
Iteration: 190; Percent complete: 4.8%; Average loss: 3.7759
Iteration: 191; Percent complete: 4.8%; Average loss: 4.0856
Iteration: 192; Percent complete: 4.8%; Average loss: 4.2861
Iteration: 193; Percent complete: 4.8%; Average loss: 4.0885
Iteration: 194; Percent complete: 4.9%; Average loss: 4.0340
Iteration: 195; Percent complete: 4.9%; Average loss: 4.2206
Iteration: 196; Percent complete: 4.9%; Average loss: 3.8888
Iteration: 197; Percent complete: 4.9%; Average loss: 4.2600
Iteration: 198; Percent complete: 5.0%; Average loss: 4.0809
Iteration: 199; Percent complete: 5.0%; Average loss: 3.8768
Iteration: 200; Percent complete: 5.0%; Average loss: 3.9250
Iteration: 201; Percent complete: 5.0%; Average loss: 3.8987
Iteration: 202; Percent complete: 5.1%; Average loss: 4.1153
Iteration: 203; Percent complete: 5.1%; Average loss: 3.9342
Iteration: 204; Percent complete: 5.1%; Average loss: 4.1364
Iteration: 205; Percent complete: 5.1%; Average loss: 4.1015
Iteration: 206; Percent complete: 5.1%; Average loss: 4.2670
Iteration: 207; Percent complete: 5.2%; Average loss: 4.1249
Iteration: 208; Percent complete: 5.2%; Average loss: 4.1458
Iteration: 209; Percent complete: 5.2%; Average loss: 4.2433
Iteration: 210; Percent complete: 5.2%; Average loss: 4.0574
Iteration: 211; Percent complete: 5.3%; Average loss: 4.2140
Iteration: 212; Percent complete: 5.3%; Average loss: 3.8877
Iteration: 213; Percent complete: 5.3%; Average loss: 3.8573
Iteration: 214; Percent complete: 5.3%; Average loss: 4.1922
Iteration: 215; Percent complete: 5.4%; Average loss: 3.9642
Iteration: 216; Percent complete: 5.4%; Average loss: 4.3454
Iteration: 217; Percent complete: 5.4%; Average loss: 3.8712
Iteration: 218; Percent complete: 5.5%; Average loss: 4.0237
Iteration: 219; Percent complete: 5.5%; Average loss: 3.8518
Iteration: 220; Percent complete: 5.5%; Average loss: 4.0537
Iteration: 221; Percent complete: 5.5%; Average loss: 4.2142
Iteration: 222; Percent complete: 5.5%; Average loss: 3.9949
Iteration: 223; Percent complete: 5.6%; Average loss: 4.2131
Iteration: 224; Percent complete: 5.6%; Average loss: 4.0020
Iteration: 225; Percent complete: 5.6%; Average loss: 4.0625
Iteration: 226; Percent complete: 5.7%; Average loss: 4.2754
Iteration: 227; Percent complete: 5.7%; Average loss: 4.1134
Iteration: 228; Percent complete: 5.7%; Average loss: 4.2379
Iteration: 229; Percent complete: 5.7%; Average loss: 4.0155
Iteration: 230; Percent complete: 5.8%; Average loss: 4.1059
Iteration: 231; Percent complete: 5.8%; Average loss: 3.9244
Iteration: 232; Percent complete: 5.8%; Average loss: 3.9393
Iteration: 233; Percent complete: 5.8%; Average loss: 4.2402
Iteration: 234; Percent complete: 5.9%; Average loss: 3.9486
Iteration: 235; Percent complete: 5.9%; Average loss: 3.9806
Iteration: 236; Percent complete: 5.9%; Average loss: 3.9762
Iteration: 237; Percent complete: 5.9%; Average loss: 3.9739
Iteration: 238; Percent complete: 5.9%; Average loss: 3.8886
Iteration: 239; Percent complete: 6.0%; Average loss: 3.9077
Iteration: 240; Percent complete: 6.0%; Average loss: 4.1070
Iteration: 241; Percent complete: 6.0%; Average loss: 3.8079
Iteration: 242; Percent complete: 6.0%; Average loss: 3.7881
Iteration: 243; Percent complete: 6.1%; Average loss: 3.7308
Iteration: 244; Percent complete: 6.1%; Average loss: 4.2105
Iteration: 245; Percent complete: 6.1%; Average loss: 4.0443
Iteration: 246; Percent complete: 6.2%; Average loss: 4.0769
Iteration: 247; Percent complete: 6.2%; Average loss: 3.9649
Iteration: 248; Percent complete: 6.2%; Average loss: 4.0710
Iteration: 249; Percent complete: 6.2%; Average loss: 4.1007
Iteration: 250; Percent complete: 6.2%; Average loss: 3.9031
Iteration: 251; Percent complete: 6.3%; Average loss: 3.9633
Iteration: 252; Percent complete: 6.3%; Average loss: 3.8330
Iteration: 253; Percent complete: 6.3%; Average loss: 3.8499
Iteration: 254; Percent complete: 6.3%; Average loss: 4.0574
Iteration: 255; Percent complete: 6.4%; Average loss: 4.0578
Iteration: 256; Percent complete: 6.4%; Average loss: 3.7530
Iteration: 257; Percent complete: 6.4%; Average loss: 3.8102
Iteration: 258; Percent complete: 6.5%; Average loss: 3.8882
Iteration: 259; Percent complete: 6.5%; Average loss: 3.8559
Iteration: 260; Percent complete: 6.5%; Average loss: 4.0348
Iteration: 261; Percent complete: 6.5%; Average loss: 3.8406
Iteration: 262; Percent complete: 6.6%; Average loss: 3.9232
Iteration: 263; Percent complete: 6.6%; Average loss: 3.5536
Iteration: 264; Percent complete: 6.6%; Average loss: 3.9799
Iteration: 265; Percent complete: 6.6%; Average loss: 3.8437
Iteration: 266; Percent complete: 6.7%; Average loss: 4.0686
Iteration: 267; Percent complete: 6.7%; Average loss: 4.0261
Iteration: 268; Percent complete: 6.7%; Average loss: 3.9393
Iteration: 269; Percent complete: 6.7%; Average loss: 4.1778
Iteration: 270; Percent complete: 6.8%; Average loss: 3.9962
Iteration: 271; Percent complete: 6.8%; Average loss: 4.2440
Iteration: 272; Percent complete: 6.8%; Average loss: 3.8694
Iteration: 273; Percent complete: 6.8%; Average loss: 3.9737
Iteration: 274; Percent complete: 6.9%; Average loss: 3.9939
Iteration: 275; Percent complete: 6.9%; Average loss: 3.8562
Iteration: 276; Percent complete: 6.9%; Average loss: 3.7607
Iteration: 277; Percent complete: 6.9%; Average loss: 4.0315
Iteration: 278; Percent complete: 7.0%; Average loss: 3.9953
Iteration: 279; Percent complete: 7.0%; Average loss: 3.7730
Iteration: 280; Percent complete: 7.0%; Average loss: 3.8810
Iteration: 281; Percent complete: 7.0%; Average loss: 3.8155
Iteration: 282; Percent complete: 7.0%; Average loss: 3.7949
Iteration: 283; Percent complete: 7.1%; Average loss: 4.0129
Iteration: 284; Percent complete: 7.1%; Average loss: 3.9393
Iteration: 285; Percent complete: 7.1%; Average loss: 3.9112
Iteration: 286; Percent complete: 7.1%; Average loss: 4.2953
Iteration: 287; Percent complete: 7.2%; Average loss: 3.9629
Iteration: 288; Percent complete: 7.2%; Average loss: 4.0430
Iteration: 289; Percent complete: 7.2%; Average loss: 3.7229
Iteration: 290; Percent complete: 7.2%; Average loss: 3.6630
Iteration: 291; Percent complete: 7.3%; Average loss: 4.1357
Iteration: 292; Percent complete: 7.3%; Average loss: 3.6125
Iteration: 293; Percent complete: 7.3%; Average loss: 4.0443
Iteration: 294; Percent complete: 7.3%; Average loss: 3.8308
Iteration: 295; Percent complete: 7.4%; Average loss: 3.8286
Iteration: 296; Percent complete: 7.4%; Average loss: 3.7475
Iteration: 297; Percent complete: 7.4%; Average loss: 4.0241
Iteration: 298; Percent complete: 7.4%; Average loss: 3.8660
Iteration: 299; Percent complete: 7.5%; Average loss: 4.0096
Iteration: 300; Percent complete: 7.5%; Average loss: 3.7442
Iteration: 301; Percent complete: 7.5%; Average loss: 3.8135
Iteration: 302; Percent complete: 7.5%; Average loss: 3.9159
Iteration: 303; Percent complete: 7.6%; Average loss: 3.9905
Iteration: 304; Percent complete: 7.6%; Average loss: 4.1652
Iteration: 305; Percent complete: 7.6%; Average loss: 3.8820
...
Iteration: 400; Percent complete: 10.0%; Average loss: 3.7494
Iteration: 500; Percent complete: 12.5%; Average loss: 3.7478
Iteration: 600; Percent complete: 15.0%; Average loss: 3.8217
Iteration: 700; Percent complete: 17.5%; Average loss: 3.7087
Iteration: 800; Percent complete: 20.0%; Average loss: 3.4319
...
Iteration: 880; Percent complete: 22.0%; Average loss: 3.5888
Iteration: 881; Percent complete: 22.0%; Average loss: 3.5023
Iteration: 882; Percent complete: 22.1%; Average loss: 3.7786
Iteration: 883; Percent complete: 22.1%; Average loss: 3.5899
Iteration: 884; Percent complete: 22.1%; Average loss: 3.4941
Iteration: 885; Percent complete: 22.1%; Average loss: 3.4198
Iteration: 886; Percent complete: 22.1%; Average loss: 3.5284
Iteration: 887; Percent complete: 22.2%; Average loss: 3.7301
Iteration: 888; Percent complete: 22.2%; Average loss: 3.6178
Iteration: 889; Percent complete: 22.2%; Average loss: 3.3171
Iteration: 890; Percent complete: 22.2%; Average loss: 3.2002
Iteration: 891; Percent complete: 22.3%; Average loss: 3.7357
Iteration: 892; Percent complete: 22.3%; Average loss: 3.6440
Iteration: 893; Percent complete: 22.3%; Average loss: 3.3519
Iteration: 894; Percent complete: 22.4%; Average loss: 3.5578
Iteration: 895; Percent complete: 22.4%; Average loss: 3.1437
Iteration: 896; Percent complete: 22.4%; Average loss: 3.5193
Iteration: 897; Percent complete: 22.4%; Average loss: 3.4350
Iteration: 898; Percent complete: 22.4%; Average loss: 3.6274
Iteration: 899; Percent complete: 22.5%; Average loss: 3.2796
Iteration: 900; Percent complete: 22.5%; Average loss: 3.4592
Iteration: 901; Percent complete: 22.5%; Average loss: 3.8135
Iteration: 902; Percent complete: 22.6%; Average loss: 3.5513
Iteration: 903; Percent complete: 22.6%; Average loss: 3.4361
Iteration: 904; Percent complete: 22.6%; Average loss: 3.2885
Iteration: 905; Percent complete: 22.6%; Average loss: 3.6075
Iteration: 906; Percent complete: 22.7%; Average loss: 3.4108
Iteration: 907; Percent complete: 22.7%; Average loss: 3.4307
Iteration: 908; Percent complete: 22.7%; Average loss: 3.5935
Iteration: 909; Percent complete: 22.7%; Average loss: 3.6944
Iteration: 910; Percent complete: 22.8%; Average loss: 3.4079
Iteration: 911; Percent complete: 22.8%; Average loss: 3.5800
Iteration: 912; Percent complete: 22.8%; Average loss: 3.5584
Iteration: 913; Percent complete: 22.8%; Average loss: 3.3393
Iteration: 914; Percent complete: 22.9%; Average loss: 3.4322
Iteration: 915; Percent complete: 22.9%; Average loss: 3.6274
Iteration: 916; Percent complete: 22.9%; Average loss: 3.4609
Iteration: 917; Percent complete: 22.9%; Average loss: 3.2408
Iteration: 918; Percent complete: 22.9%; Average loss: 3.5875
Iteration: 919; Percent complete: 23.0%; Average loss: 3.7425
Iteration: 920; Percent complete: 23.0%; Average loss: 3.4865
Iteration: 921; Percent complete: 23.0%; Average loss: 3.9158
Iteration: 922; Percent complete: 23.1%; Average loss: 3.6273
Iteration: 923; Percent complete: 23.1%; Average loss: 3.4026
Iteration: 924; Percent complete: 23.1%; Average loss: 3.3530
Iteration: 925; Percent complete: 23.1%; Average loss: 3.5638
Iteration: 926; Percent complete: 23.2%; Average loss: 3.2307
Iteration: 927; Percent complete: 23.2%; Average loss: 3.5063
Iteration: 928; Percent complete: 23.2%; Average loss: 3.7098
Iteration: 929; Percent complete: 23.2%; Average loss: 3.4988
Iteration: 930; Percent complete: 23.2%; Average loss: 3.2661
Iteration: 931; Percent complete: 23.3%; Average loss: 3.4975
Iteration: 932; Percent complete: 23.3%; Average loss: 3.5500
Iteration: 933; Percent complete: 23.3%; Average loss: 3.5197
Iteration: 934; Percent complete: 23.4%; Average loss: 3.5469
Iteration: 935; Percent complete: 23.4%; Average loss: 3.3586
Iteration: 936; Percent complete: 23.4%; Average loss: 3.6767
Iteration: 937; Percent complete: 23.4%; Average loss: 3.6014
Iteration: 938; Percent complete: 23.4%; Average loss: 3.7782
Iteration: 939; Percent complete: 23.5%; Average loss: 3.3084
Iteration: 940; Percent complete: 23.5%; Average loss: 3.4524
Iteration: 941; Percent complete: 23.5%; Average loss: 3.7790
Iteration: 942; Percent complete: 23.5%; Average loss: 3.3530
Iteration: 943; Percent complete: 23.6%; Average loss: 3.4617
Iteration: 944; Percent complete: 23.6%; Average loss: 3.2788
Iteration: 945; Percent complete: 23.6%; Average loss: 3.7062
Iteration: 946; Percent complete: 23.6%; Average loss: 3.3104
Iteration: 947; Percent complete: 23.7%; Average loss: 3.5001
Iteration: 948; Percent complete: 23.7%; Average loss: 3.6347
Iteration: 949; Percent complete: 23.7%; Average loss: 3.4977
Iteration: 950; Percent complete: 23.8%; Average loss: 3.2186
Iteration: 951; Percent complete: 23.8%; Average loss: 3.5751
Iteration: 952; Percent complete: 23.8%; Average loss: 3.5213
Iteration: 953; Percent complete: 23.8%; Average loss: 3.4798
Iteration: 954; Percent complete: 23.8%; Average loss: 3.3908
Iteration: 955; Percent complete: 23.9%; Average loss: 3.4748
Iteration: 956; Percent complete: 23.9%; Average loss: 3.3893
Iteration: 957; Percent complete: 23.9%; Average loss: 3.6419
Iteration: 958; Percent complete: 23.9%; Average loss: 3.3937
Iteration: 959; Percent complete: 24.0%; Average loss: 3.4465
Iteration: 960; Percent complete: 24.0%; Average loss: 3.6125
Iteration: 961; Percent complete: 24.0%; Average loss: 3.5785
Iteration: 962; Percent complete: 24.1%; Average loss: 3.4534
Iteration: 963; Percent complete: 24.1%; Average loss: 3.3712
Iteration: 964; Percent complete: 24.1%; Average loss: 3.5364
Iteration: 965; Percent complete: 24.1%; Average loss: 3.3211
Iteration: 966; Percent complete: 24.1%; Average loss: 3.4846
Iteration: 967; Percent complete: 24.2%; Average loss: 3.8268
Iteration: 968; Percent complete: 24.2%; Average loss: 3.5259
Iteration: 969; Percent complete: 24.2%; Average loss: 3.5004
Iteration: 970; Percent complete: 24.2%; Average loss: 3.4907
Iteration: 971; Percent complete: 24.3%; Average loss: 3.5476
Iteration: 972; Percent complete: 24.3%; Average loss: 3.3797
Iteration: 973; Percent complete: 24.3%; Average loss: 3.2900
Iteration: 974; Percent complete: 24.3%; Average loss: 3.3800
Iteration: 975; Percent complete: 24.4%; Average loss: 3.5922
Iteration: 976; Percent complete: 24.4%; Average loss: 3.6576
Iteration: 977; Percent complete: 24.4%; Average loss: 3.4183
Iteration: 978; Percent complete: 24.4%; Average loss: 3.5073
Iteration: 979; Percent complete: 24.5%; Average loss: 3.7786
Iteration: 980; Percent complete: 24.5%; Average loss: 3.6999
Iteration: 981; Percent complete: 24.5%; Average loss: 3.6344
Iteration: 982; Percent complete: 24.6%; Average loss: 3.3921
Iteration: 983; Percent complete: 24.6%; Average loss: 3.4432
Iteration: 984; Percent complete: 24.6%; Average loss: 3.3590
Iteration: 985; Percent complete: 24.6%; Average loss: 3.4301
Iteration: 986; Percent complete: 24.6%; Average loss: 3.3776
Iteration: 987; Percent complete: 24.7%; Average loss: 3.2749
Iteration: 988; Percent complete: 24.7%; Average loss: 3.3413
Iteration: 989; Percent complete: 24.7%; Average loss: 3.6104
Iteration: 990; Percent complete: 24.8%; Average loss: 3.5319
Iteration: 991; Percent complete: 24.8%; Average loss: 3.8374
Iteration: 992; Percent complete: 24.8%; Average loss: 3.5099
Iteration: 993; Percent complete: 24.8%; Average loss: 3.5003
Iteration: 994; Percent complete: 24.9%; Average loss: 3.5039
Iteration: 995; Percent complete: 24.9%; Average loss: 3.2617
Iteration: 996; Percent complete: 24.9%; Average loss: 3.3440
Iteration: 997; Percent complete: 24.9%; Average loss: 3.4407
Iteration: 998; Percent complete: 24.9%; Average loss: 3.7589
Iteration: 999; Percent complete: 25.0%; Average loss: 3.3017
Iteration: 1000; Percent complete: 25.0%; Average loss: 3.2986
Iteration: 1001; Percent complete: 25.0%; Average loss: 3.7113
Iteration: 1002; Percent complete: 25.1%; Average loss: 3.3493
Iteration: 1003; Percent complete: 25.1%; Average loss: 3.4085
Iteration: 1004; Percent complete: 25.1%; Average loss: 3.4868
Iteration: 1005; Percent complete: 25.1%; Average loss: 3.6340
Iteration: 1006; Percent complete: 25.1%; Average loss: 3.3969
Iteration: 1007; Percent complete: 25.2%; Average loss: 3.3688
Iteration: 1008; Percent complete: 25.2%; Average loss: 3.4576
Iteration: 1009; Percent complete: 25.2%; Average loss: 3.4724
Iteration: 1010; Percent complete: 25.2%; Average loss: 3.3993
Iteration: 1011; Percent complete: 25.3%; Average loss: 3.4996
Iteration: 1012; Percent complete: 25.3%; Average loss: 3.5063
Iteration: 1013; Percent complete: 25.3%; Average loss: 3.3639
Iteration: 1014; Percent complete: 25.4%; Average loss: 3.4577
Iteration: 1015; Percent complete: 25.4%; Average loss: 3.4139
Iteration: 1016; Percent complete: 25.4%; Average loss: 3.4455
Iteration: 1017; Percent complete: 25.4%; Average loss: 3.2968
Iteration: 1018; Percent complete: 25.4%; Average loss: 3.6203
Iteration: 1019; Percent complete: 25.5%; Average loss: 3.5772
Iteration: 1020; Percent complete: 25.5%; Average loss: 3.3508
Iteration: 1021; Percent complete: 25.5%; Average loss: 3.6650
Iteration: 1022; Percent complete: 25.6%; Average loss: 3.5362
Iteration: 1023; Percent complete: 25.6%; Average loss: 3.3856
Iteration: 1024; Percent complete: 25.6%; Average loss: 3.3205
Iteration: 1025; Percent complete: 25.6%; Average loss: 3.7494
Iteration: 1026; Percent complete: 25.7%; Average loss: 3.6776
Iteration: 1027; Percent complete: 25.7%; Average loss: 3.6199
Iteration: 1028; Percent complete: 25.7%; Average loss: 3.3957
Iteration: 1029; Percent complete: 25.7%; Average loss: 3.5690
Iteration: 1030; Percent complete: 25.8%; Average loss: 3.4110
Iteration: 1031; Percent complete: 25.8%; Average loss: 3.5122
Iteration: 1032; Percent complete: 25.8%; Average loss: 3.6170
Iteration: 1033; Percent complete: 25.8%; Average loss: 3.6075
Iteration: 1034; Percent complete: 25.9%; Average loss: 3.5973
Iteration: 1035; Percent complete: 25.9%; Average loss: 3.6152
Iteration: 1036; Percent complete: 25.9%; Average loss: 3.2950
Iteration: 1037; Percent complete: 25.9%; Average loss: 3.6452
Iteration: 1038; Percent complete: 25.9%; Average loss: 3.1738
Iteration: 1039; Percent complete: 26.0%; Average loss: 3.4615
Iteration: 1040; Percent complete: 26.0%; Average loss: 3.4608
Iteration: 1041; Percent complete: 26.0%; Average loss: 3.4886
Iteration: 1042; Percent complete: 26.1%; Average loss: 3.4307
Iteration: 1043; Percent complete: 26.1%; Average loss: 3.4928
Iteration: 1044; Percent complete: 26.1%; Average loss: 3.2097
Iteration: 1045; Percent complete: 26.1%; Average loss: 3.4712
Iteration: 1046; Percent complete: 26.2%; Average loss: 3.4891
Iteration: 1047; Percent complete: 26.2%; Average loss: 3.4679
Iteration: 1048; Percent complete: 26.2%; Average loss: 3.4566
Iteration: 1049; Percent complete: 26.2%; Average loss: 3.3165
Iteration: 1050; Percent complete: 26.2%; Average loss: 3.3166
Iteration: 1051; Percent complete: 26.3%; Average loss: 3.5116
Iteration: 1052; Percent complete: 26.3%; Average loss: 3.3814
Iteration: 1053; Percent complete: 26.3%; Average loss: 3.3848
Iteration: 1054; Percent complete: 26.4%; Average loss: 3.5774
Iteration: 1055; Percent complete: 26.4%; Average loss: 3.3878
Iteration: 1056; Percent complete: 26.4%; Average loss: 3.4420
Iteration: 1057; Percent complete: 26.4%; Average loss: 3.3082
Iteration: 1058; Percent complete: 26.5%; Average loss: 3.6419
Iteration: 1059; Percent complete: 26.5%; Average loss: 3.2613
Iteration: 1060; Percent complete: 26.5%; Average loss: 3.6162
Iteration: 1061; Percent complete: 26.5%; Average loss: 3.2279
Iteration: 1062; Percent complete: 26.6%; Average loss: 3.4647
Iteration: 1063; Percent complete: 26.6%; Average loss: 3.2291
Iteration: 1064; Percent complete: 26.6%; Average loss: 3.4302
Iteration: 1065; Percent complete: 26.6%; Average loss: 3.2198
Iteration: 1066; Percent complete: 26.7%; Average loss: 3.5504
Iteration: 1067; Percent complete: 26.7%; Average loss: 3.5475
Iteration: 1068; Percent complete: 26.7%; Average loss: 3.3220
Iteration: 1069; Percent complete: 26.7%; Average loss: 3.3161
Iteration: 1070; Percent complete: 26.8%; Average loss: 3.5123
Iteration: 1071; Percent complete: 26.8%; Average loss: 3.5566
Iteration: 1072; Percent complete: 26.8%; Average loss: 3.2613
Iteration: 1073; Percent complete: 26.8%; Average loss: 3.2600
Iteration: 1074; Percent complete: 26.9%; Average loss: 3.2983
Iteration: 1075; Percent complete: 26.9%; Average loss: 3.5248
Iteration: 1076; Percent complete: 26.9%; Average loss: 3.4831
Iteration: 1077; Percent complete: 26.9%; Average loss: 3.4946
Iteration: 1078; Percent complete: 27.0%; Average loss: 3.5266
Iteration: 1079; Percent complete: 27.0%; Average loss: 3.3361
Iteration: 1080; Percent complete: 27.0%; Average loss: 3.1770
Iteration: 1081; Percent complete: 27.0%; Average loss: 3.4720
Iteration: 1082; Percent complete: 27.1%; Average loss: 3.2981
Iteration: 1083; Percent complete: 27.1%; Average loss: 3.3841
Iteration: 1084; Percent complete: 27.1%; Average loss: 3.4711
Iteration: 1085; Percent complete: 27.1%; Average loss: 3.3238
Iteration: 1086; Percent complete: 27.2%; Average loss: 3.3971
Iteration: 1087; Percent complete: 27.2%; Average loss: 3.4090
Iteration: 1088; Percent complete: 27.2%; Average loss: 3.3703
Iteration: 1089; Percent complete: 27.2%; Average loss: 3.4217
Iteration: 1090; Percent complete: 27.3%; Average loss: 3.5441
Iteration: 1091; Percent complete: 27.3%; Average loss: 3.1727
Iteration: 1092; Percent complete: 27.3%; Average loss: 3.4980
Iteration: 1093; Percent complete: 27.3%; Average loss: 3.5585
Iteration: 1094; Percent complete: 27.4%; Average loss: 3.4125
Iteration: 1095; Percent complete: 27.4%; Average loss: 3.3813
Iteration: 1096; Percent complete: 27.4%; Average loss: 3.5643
Iteration: 1097; Percent complete: 27.4%; Average loss: 3.4122
Iteration: 1098; Percent complete: 27.5%; Average loss: 3.4458
Iteration: 1099; Percent complete: 27.5%; Average loss: 3.4134
Iteration: 1100; Percent complete: 27.5%; Average loss: 3.2381
Iteration: 1101; Percent complete: 27.5%; Average loss: 3.4913
Iteration: 1102; Percent complete: 27.6%; Average loss: 3.1517
Iteration: 1103; Percent complete: 27.6%; Average loss: 3.4470
Iteration: 1104; Percent complete: 27.6%; Average loss: 3.4220
Iteration: 1105; Percent complete: 27.6%; Average loss: 3.4555
Iteration: 1106; Percent complete: 27.7%; Average loss: 3.6539
Iteration: 1107; Percent complete: 27.7%; Average loss: 3.4542
Iteration: 1108; Percent complete: 27.7%; Average loss: 3.2190
Iteration: 1109; Percent complete: 27.7%; Average loss: 3.5509
Iteration: 1110; Percent complete: 27.8%; Average loss: 3.5056
Iteration: 1111; Percent complete: 27.8%; Average loss: 3.5611
Iteration: 1112; Percent complete: 27.8%; Average loss: 3.4565
Iteration: 1113; Percent complete: 27.8%; Average loss: 3.4261
Iteration: 1114; Percent complete: 27.9%; Average loss: 3.3002
Iteration: 1115; Percent complete: 27.9%; Average loss: 3.4294
Iteration: 1116; Percent complete: 27.9%; Average loss: 3.3823
Iteration: 1117; Percent complete: 27.9%; Average loss: 3.3264
Iteration: 1118; Percent complete: 28.0%; Average loss: 3.4244
Iteration: 1119; Percent complete: 28.0%; Average loss: 3.7376
Iteration: 1120; Percent complete: 28.0%; Average loss: 3.4268
Iteration: 1121; Percent complete: 28.0%; Average loss: 3.3059
Iteration: 1122; Percent complete: 28.1%; Average loss: 3.3597
Iteration: 1123; Percent complete: 28.1%; Average loss: 3.5500
Iteration: 1124; Percent complete: 28.1%; Average loss: 3.1763
Iteration: 1125; Percent complete: 28.1%; Average loss: 3.2178
Iteration: 1126; Percent complete: 28.1%; Average loss: 3.3972
Iteration: 1127; Percent complete: 28.2%; Average loss: 3.5206
Iteration: 1128; Percent complete: 28.2%; Average loss: 3.6414
Iteration: 1129; Percent complete: 28.2%; Average loss: 3.4850
Iteration: 1130; Percent complete: 28.2%; Average loss: 3.5649
Iteration: 1131; Percent complete: 28.3%; Average loss: 3.1183
Iteration: 1132; Percent complete: 28.3%; Average loss: 3.4441
Iteration: 1133; Percent complete: 28.3%; Average loss: 3.5498
Iteration: 1134; Percent complete: 28.3%; Average loss: 3.4529
Iteration: 1135; Percent complete: 28.4%; Average loss: 3.4909
Iteration: 1136; Percent complete: 28.4%; Average loss: 3.4876
Iteration: 1137; Percent complete: 28.4%; Average loss: 3.3263
Iteration: 1138; Percent complete: 28.4%; Average loss: 3.3523
Iteration: 1139; Percent complete: 28.5%; Average loss: 3.5379
Iteration: 1140; Percent complete: 28.5%; Average loss: 3.5178
Iteration: 1141; Percent complete: 28.5%; Average loss: 3.5047
Iteration: 1142; Percent complete: 28.5%; Average loss: 3.3242
Iteration: 1143; Percent complete: 28.6%; Average loss: 3.4016
Iteration: 1144; Percent complete: 28.6%; Average loss: 3.4480
Iteration: 1145; Percent complete: 28.6%; Average loss: 3.3096
Iteration: 1146; Percent complete: 28.6%; Average loss: 3.3509
Iteration: 1147; Percent complete: 28.7%; Average loss: 3.3645
Iteration: 1148; Percent complete: 28.7%; Average loss: 3.0403
Iteration: 1149; Percent complete: 28.7%; Average loss: 3.4162
Iteration: 1150; Percent complete: 28.7%; Average loss: 3.5092
Iteration: 1151; Percent complete: 28.8%; Average loss: 3.3332
Iteration: 1152; Percent complete: 28.8%; Average loss: 3.1975
Iteration: 1153; Percent complete: 28.8%; Average loss: 3.6462
Iteration: 1154; Percent complete: 28.8%; Average loss: 3.5101
Iteration: 1155; Percent complete: 28.9%; Average loss: 3.3488
Iteration: 1156; Percent complete: 28.9%; Average loss: 3.4219
Iteration: 1157; Percent complete: 28.9%; Average loss: 3.2798
Iteration: 1158; Percent complete: 28.9%; Average loss: 3.6036
Iteration: 1159; Percent complete: 29.0%; Average loss: 3.3310
Iteration: 1160; Percent complete: 29.0%; Average loss: 3.2711
Iteration: 1161; Percent complete: 29.0%; Average loss: 3.4605
Iteration: 1162; Percent complete: 29.0%; Average loss: 3.0622
Iteration: 1163; Percent complete: 29.1%; Average loss: 3.4337
Iteration: 1164; Percent complete: 29.1%; Average loss: 3.2529
Iteration: 1165; Percent complete: 29.1%; Average loss: 3.3851
Iteration: 1166; Percent complete: 29.1%; Average loss: 3.5376
Iteration: 1167; Percent complete: 29.2%; Average loss: 3.4511
Iteration: 1168; Percent complete: 29.2%; Average loss: 3.3334
Iteration: 1169; Percent complete: 29.2%; Average loss: 3.6662
Iteration: 1170; Percent complete: 29.2%; Average loss: 3.1853
Iteration: 1171; Percent complete: 29.3%; Average loss: 3.3107
Iteration: 1172; Percent complete: 29.3%; Average loss: 3.4710
Iteration: 1173; Percent complete: 29.3%; Average loss: 3.4961
Iteration: 1174; Percent complete: 29.3%; Average loss: 3.3463
Iteration: 1175; Percent complete: 29.4%; Average loss: 3.3408
Iteration: 1176; Percent complete: 29.4%; Average loss: 3.7867
Iteration: 1177; Percent complete: 29.4%; Average loss: 3.3250
Iteration: 1178; Percent complete: 29.4%; Average loss: 3.4048
Iteration: 1179; Percent complete: 29.5%; Average loss: 3.4313
Iteration: 1180; Percent complete: 29.5%; Average loss: 3.3420
Iteration: 1181; Percent complete: 29.5%; Average loss: 3.3472
Iteration: 1182; Percent complete: 29.5%; Average loss: 3.2070
Iteration: 1183; Percent complete: 29.6%; Average loss: 3.7777
Iteration: 1184; Percent complete: 29.6%; Average loss: 3.3820
Iteration: 1185; Percent complete: 29.6%; Average loss: 3.3602
Iteration: 1186; Percent complete: 29.6%; Average loss: 3.5427
Iteration: 1187; Percent complete: 29.7%; Average loss: 3.5581
Iteration: 1188; Percent complete: 29.7%; Average loss: 3.3267
Iteration: 1189; Percent complete: 29.7%; Average loss: 2.9495
Iteration: 1190; Percent complete: 29.8%; Average loss: 3.3402
Iteration: 1191; Percent complete: 29.8%; Average loss: 3.1244
Iteration: 1192; Percent complete: 29.8%; Average loss: 3.3380
Iteration: 1193; Percent complete: 29.8%; Average loss: 3.3026
Iteration: 1194; Percent complete: 29.8%; Average loss: 3.3542
Iteration: 1195; Percent complete: 29.9%; Average loss: 3.2996
Iteration: 1196; Percent complete: 29.9%; Average loss: 3.4705
Iteration: 1197; Percent complete: 29.9%; Average loss: 3.5132
Iteration: 1198; Percent complete: 29.9%; Average loss: 3.3011
Iteration: 1199; Percent complete: 30.0%; Average loss: 3.2441
Iteration: 1200; Percent complete: 30.0%; Average loss: 3.3968
Iteration: 1201; Percent complete: 30.0%; Average loss: 3.4269
Iteration: 1202; Percent complete: 30.0%; Average loss: 3.0726
Iteration: 1203; Percent complete: 30.1%; Average loss: 3.2254
Iteration: 1204; Percent complete: 30.1%; Average loss: 3.3913
Iteration: 1205; Percent complete: 30.1%; Average loss: 3.5630
Iteration: 1206; Percent complete: 30.1%; Average loss: 3.4360
Iteration: 1207; Percent complete: 30.2%; Average loss: 3.3500
Iteration: 1208; Percent complete: 30.2%; Average loss: 3.5105
Iteration: 1209; Percent complete: 30.2%; Average loss: 3.5038
Iteration: 1210; Percent complete: 30.2%; Average loss: 3.2796
Iteration: 1211; Percent complete: 30.3%; Average loss: 3.3600
Iteration: 1212; Percent complete: 30.3%; Average loss: 3.3548
Iteration: 1213; Percent complete: 30.3%; Average loss: 3.1918
Iteration: 1214; Percent complete: 30.3%; Average loss: 3.2629
Iteration: 1215; Percent complete: 30.4%; Average loss: 3.2284
Iteration: 1216; Percent complete: 30.4%; Average loss: 3.4864
Iteration: 1217; Percent complete: 30.4%; Average loss: 3.2173
Iteration: 1218; Percent complete: 30.4%; Average loss: 3.4351
Iteration: 1219; Percent complete: 30.5%; Average loss: 3.5482
Iteration: 1220; Percent complete: 30.5%; Average loss: 3.2276
Iteration: 1221; Percent complete: 30.5%; Average loss: 3.1746
Iteration: 1222; Percent complete: 30.6%; Average loss: 3.1899
Iteration: 1223; Percent complete: 30.6%; Average loss: 3.1802
Iteration: 1224; Percent complete: 30.6%; Average loss: 3.1957
Iteration: 1225; Percent complete: 30.6%; Average loss: 3.5805
Iteration: 1226; Percent complete: 30.6%; Average loss: 3.4425
Iteration: 1227; Percent complete: 30.7%; Average loss: 3.3218
Iteration: 1228; Percent complete: 30.7%; Average loss: 3.3296
Iteration: 1229; Percent complete: 30.7%; Average loss: 3.0346
Iteration: 1230; Percent complete: 30.8%; Average loss: 3.2599
Iteration: 1231; Percent complete: 30.8%; Average loss: 3.0039
Iteration: 1232; Percent complete: 30.8%; Average loss: 3.4266
Iteration: 1233; Percent complete: 30.8%; Average loss: 3.5409
Iteration: 1234; Percent complete: 30.9%; Average loss: 3.5018
Iteration: 1235; Percent complete: 30.9%; Average loss: 3.2463
Iteration: 1236; Percent complete: 30.9%; Average loss: 3.3117
Iteration: 1237; Percent complete: 30.9%; Average loss: 3.3434
Iteration: 1238; Percent complete: 30.9%; Average loss: 3.2203
Iteration: 1239; Percent complete: 31.0%; Average loss: 3.2203
Iteration: 1240; Percent complete: 31.0%; Average loss: 3.4984
Iteration: 1241; Percent complete: 31.0%; Average loss: 3.4080
Iteration: 1242; Percent complete: 31.1%; Average loss: 3.3214
Iteration: 1243; Percent complete: 31.1%; Average loss: 3.4004
Iteration: 1244; Percent complete: 31.1%; Average loss: 3.2304
Iteration: 1245; Percent complete: 31.1%; Average loss: 3.3270
Iteration: 1246; Percent complete: 31.1%; Average loss: 3.2574
Iteration: 1247; Percent complete: 31.2%; Average loss: 3.2018
Iteration: 1248; Percent complete: 31.2%; Average loss: 3.4061
Iteration: 1249; Percent complete: 31.2%; Average loss: 3.3894
Iteration: 1250; Percent complete: 31.2%; Average loss: 3.2838
Iteration: 1251; Percent complete: 31.3%; Average loss: 3.2936
Iteration: 1252; Percent complete: 31.3%; Average loss: 3.1343
Iteration: 1253; Percent complete: 31.3%; Average loss: 3.6057
Iteration: 1254; Percent complete: 31.4%; Average loss: 3.4842
Iteration: 1255; Percent complete: 31.4%; Average loss: 3.3544
Iteration: 1256; Percent complete: 31.4%; Average loss: 3.4036
Iteration: 1257; Percent complete: 31.4%; Average loss: 3.2920
Iteration: 1258; Percent complete: 31.4%; Average loss: 3.5674
Iteration: 1259; Percent complete: 31.5%; Average loss: 3.2772
Iteration: 1260; Percent complete: 31.5%; Average loss: 3.2308
Iteration: 1261; Percent complete: 31.5%; Average loss: 3.5208
Iteration: 1262; Percent complete: 31.6%; Average loss: 3.5806
Iteration: 1263; Percent complete: 31.6%; Average loss: 3.3279
Iteration: 1264; Percent complete: 31.6%; Average loss: 3.6041
Iteration: 1265; Percent complete: 31.6%; Average loss: 3.4696
Iteration: 1266; Percent complete: 31.6%; Average loss: 3.5877
Iteration: 1267; Percent complete: 31.7%; Average loss: 3.5034
Iteration: 1268; Percent complete: 31.7%; Average loss: 3.3956
Iteration: 1269; Percent complete: 31.7%; Average loss: 3.3295
Iteration: 1270; Percent complete: 31.8%; Average loss: 3.2503
Iteration: 1271; Percent complete: 31.8%; Average loss: 3.3542
Iteration: 1272; Percent complete: 31.8%; Average loss: 3.4671
Iteration: 1273; Percent complete: 31.8%; Average loss: 3.7014
Iteration: 1274; Percent complete: 31.9%; Average loss: 3.4618
Iteration: 1275; Percent complete: 31.9%; Average loss: 3.2852
Iteration: 1276; Percent complete: 31.9%; Average loss: 3.4935
Iteration: 1277; Percent complete: 31.9%; Average loss: 3.2957
Iteration: 1278; Percent complete: 31.9%; Average loss: 3.4253
Iteration: 1279; Percent complete: 32.0%; Average loss: 3.5858
Iteration: 1280; Percent complete: 32.0%; Average loss: 3.2989
Iteration: 1281; Percent complete: 32.0%; Average loss: 3.2142
Iteration: 1282; Percent complete: 32.0%; Average loss: 3.2897
Iteration: 1283; Percent complete: 32.1%; Average loss: 3.1443
Iteration: 1284; Percent complete: 32.1%; Average loss: 3.1639
Iteration: 1285; Percent complete: 32.1%; Average loss: 3.4735
Iteration: 1286; Percent complete: 32.1%; Average loss: 3.2425
Iteration: 1287; Percent complete: 32.2%; Average loss: 3.2299
Iteration: 1288; Percent complete: 32.2%; Average loss: 3.5765
Iteration: 1289; Percent complete: 32.2%; Average loss: 3.4438
Iteration: 1290; Percent complete: 32.2%; Average loss: 3.2792
Iteration: 1291; Percent complete: 32.3%; Average loss: 3.2577
Iteration: 1292; Percent complete: 32.3%; Average loss: 3.3110
Iteration: 1293; Percent complete: 32.3%; Average loss: 3.3394
Iteration: 1294; Percent complete: 32.4%; Average loss: 3.3840
Iteration: 1295; Percent complete: 32.4%; Average loss: 3.3229
Iteration: 1296; Percent complete: 32.4%; Average loss: 3.5759
Iteration: 1297; Percent complete: 32.4%; Average loss: 3.3809
Iteration: 1298; Percent complete: 32.5%; Average loss: 3.5260
Iteration: 1299; Percent complete: 32.5%; Average loss: 3.2497
Iteration: 1300; Percent complete: 32.5%; Average loss: 3.4900
Iteration: 1301; Percent complete: 32.5%; Average loss: 3.5324
Iteration: 1302; Percent complete: 32.6%; Average loss: 3.1717
Iteration: 1303; Percent complete: 32.6%; Average loss: 3.1907
Iteration: 1304; Percent complete: 32.6%; Average loss: 3.3797
Iteration: 1305; Percent complete: 32.6%; Average loss: 3.4008
Iteration: 1306; Percent complete: 32.6%; Average loss: 3.4701
Iteration: 1307; Percent complete: 32.7%; Average loss: 3.3527
Iteration: 1308; Percent complete: 32.7%; Average loss: 3.4559
Iteration: 1309; Percent complete: 32.7%; Average loss: 3.1152
Iteration: 1310; Percent complete: 32.8%; Average loss: 3.4286
Iteration: 1311; Percent complete: 32.8%; Average loss: 3.4445
Iteration: 1312; Percent complete: 32.8%; Average loss: 3.5338
Iteration: 1313; Percent complete: 32.8%; Average loss: 3.4433
Iteration: 1314; Percent complete: 32.9%; Average loss: 3.5451
Iteration: 1315; Percent complete: 32.9%; Average loss: 3.3478
Iteration: 1316; Percent complete: 32.9%; Average loss: 3.3365
Iteration: 1317; Percent complete: 32.9%; Average loss: 3.4804
Iteration: 1318; Percent complete: 33.0%; Average loss: 3.4385
Iteration: 1319; Percent complete: 33.0%; Average loss: 3.3732
Iteration: 1320; Percent complete: 33.0%; Average loss: 3.4694
Iteration: 1321; Percent complete: 33.0%; Average loss: 3.5645
Iteration: 1322; Percent complete: 33.1%; Average loss: 3.3283
Iteration: 1323; Percent complete: 33.1%; Average loss: 3.4640
Iteration: 1324; Percent complete: 33.1%; Average loss: 3.2182
Iteration: 1325; Percent complete: 33.1%; Average loss: 3.1296
Iteration: 1326; Percent complete: 33.1%; Average loss: 3.2424
Iteration: 1327; Percent complete: 33.2%; Average loss: 3.3189
Iteration: 1328; Percent complete: 33.2%; Average loss: 3.4559
Iteration: 1329; Percent complete: 33.2%; Average loss: 3.5747
Iteration: 1330; Percent complete: 33.2%; Average loss: 3.3570
Iteration: 1331; Percent complete: 33.3%; Average loss: 3.2298
Iteration: 1332; Percent complete: 33.3%; Average loss: 3.2086
Iteration: 1333; Percent complete: 33.3%; Average loss: 3.3567
Iteration: 1334; Percent complete: 33.4%; Average loss: 3.3970
...
Iteration: 1500; Percent complete: 37.5%; Average loss: 3.2870
...
Iteration: 1899; Percent complete: 47.5%; Average loss: 3.2818
Iteration: 1900; Percent complete: 47.5%; Average loss: 3.1860
Iteration: 1901; Percent complete: 47.5%; Average loss: 3.0784
Iteration: 1902; Percent complete: 47.5%; Average loss: 3.0873
Iteration: 1903; Percent complete: 47.6%; Average loss: 3.1192
Iteration: 1904; Percent complete: 47.6%; Average loss: 3.2456
Iteration: 1905; Percent complete: 47.6%; Average loss: 3.0352
Iteration: 1906; Percent complete: 47.6%; Average loss: 2.8477
Iteration: 1907; Percent complete: 47.7%; Average loss: 3.3774
Iteration: 1908; Percent complete: 47.7%; Average loss: 3.3625
Iteration: 1909; Percent complete: 47.7%; Average loss: 3.3343
Iteration: 1910; Percent complete: 47.8%; Average loss: 3.0614
Iteration: 1911; Percent complete: 47.8%; Average loss: 3.1334
Iteration: 1912; Percent complete: 47.8%; Average loss: 3.2090
Iteration: 1913; Percent complete: 47.8%; Average loss: 3.3747
Iteration: 1914; Percent complete: 47.9%; Average loss: 3.4437
Iteration: 1915; Percent complete: 47.9%; Average loss: 3.2166
Iteration: 1916; Percent complete: 47.9%; Average loss: 3.0970
Iteration: 1917; Percent complete: 47.9%; Average loss: 3.1107
Iteration: 1918; Percent complete: 47.9%; Average loss: 3.1989
Iteration: 1919; Percent complete: 48.0%; Average loss: 3.0064
Iteration: 1920; Percent complete: 48.0%; Average loss: 3.3178
Iteration: 1921; Percent complete: 48.0%; Average loss: 3.0296
Iteration: 1922; Percent complete: 48.0%; Average loss: 3.2588
Iteration: 1923; Percent complete: 48.1%; Average loss: 3.2253
Iteration: 1924; Percent complete: 48.1%; Average loss: 3.1312
Iteration: 1925; Percent complete: 48.1%; Average loss: 3.4196
Iteration: 1926; Percent complete: 48.1%; Average loss: 3.2525
Iteration: 1927; Percent complete: 48.2%; Average loss: 3.4103
Iteration: 1928; Percent complete: 48.2%; Average loss: 3.2850
Iteration: 1929; Percent complete: 48.2%; Average loss: 3.3931
Iteration: 1930; Percent complete: 48.2%; Average loss: 3.2645
Iteration: 1931; Percent complete: 48.3%; Average loss: 3.0684
Iteration: 1932; Percent complete: 48.3%; Average loss: 3.1625
Iteration: 1933; Percent complete: 48.3%; Average loss: 3.4482
Iteration: 1934; Percent complete: 48.4%; Average loss: 3.1261
Iteration: 1935; Percent complete: 48.4%; Average loss: 3.2619
Iteration: 1936; Percent complete: 48.4%; Average loss: 3.2172
Iteration: 1937; Percent complete: 48.4%; Average loss: 3.2893
Iteration: 1938; Percent complete: 48.4%; Average loss: 2.9538
Iteration: 1939; Percent complete: 48.5%; Average loss: 3.5536
Iteration: 1940; Percent complete: 48.5%; Average loss: 3.1270
Iteration: 1941; Percent complete: 48.5%; Average loss: 3.1290
Iteration: 1942; Percent complete: 48.5%; Average loss: 2.9938
Iteration: 1943; Percent complete: 48.6%; Average loss: 3.2109
Iteration: 1944; Percent complete: 48.6%; Average loss: 3.2417
Iteration: 1945; Percent complete: 48.6%; Average loss: 3.5277
Iteration: 1946; Percent complete: 48.6%; Average loss: 2.9629
Iteration: 1947; Percent complete: 48.7%; Average loss: 3.3614
Iteration: 1948; Percent complete: 48.7%; Average loss: 2.9791
Iteration: 1949; Percent complete: 48.7%; Average loss: 3.1712
Iteration: 1950; Percent complete: 48.8%; Average loss: 3.0196
Iteration: 1951; Percent complete: 48.8%; Average loss: 3.3614
Iteration: 1952; Percent complete: 48.8%; Average loss: 3.0921
Iteration: 1953; Percent complete: 48.8%; Average loss: 3.3149
Iteration: 1954; Percent complete: 48.9%; Average loss: 3.1038
Iteration: 1955; Percent complete: 48.9%; Average loss: 3.0557
Iteration: 1956; Percent complete: 48.9%; Average loss: 3.2041
Iteration: 1957; Percent complete: 48.9%; Average loss: 3.3014
Iteration: 1958; Percent complete: 48.9%; Average loss: 2.8707
Iteration: 1959; Percent complete: 49.0%; Average loss: 3.2549
Iteration: 1960; Percent complete: 49.0%; Average loss: 2.9461
Iteration: 1961; Percent complete: 49.0%; Average loss: 3.2241
Iteration: 1962; Percent complete: 49.0%; Average loss: 3.2206
Iteration: 1963; Percent complete: 49.1%; Average loss: 2.8906
Iteration: 1964; Percent complete: 49.1%; Average loss: 3.2065
Iteration: 1965; Percent complete: 49.1%; Average loss: 3.1791
Iteration: 1966; Percent complete: 49.1%; Average loss: 3.0190
Iteration: 1967; Percent complete: 49.2%; Average loss: 3.2159
Iteration: 1968; Percent complete: 49.2%; Average loss: 3.2074
Iteration: 1969; Percent complete: 49.2%; Average loss: 3.3530
Iteration: 1970; Percent complete: 49.2%; Average loss: 3.5659
Iteration: 1971; Percent complete: 49.3%; Average loss: 3.3463
Iteration: 1972; Percent complete: 49.3%; Average loss: 3.2650
Iteration: 1973; Percent complete: 49.3%; Average loss: 2.9250
Iteration: 1974; Percent complete: 49.4%; Average loss: 3.0987
Iteration: 1975; Percent complete: 49.4%; Average loss: 3.0753
Iteration: 1976; Percent complete: 49.4%; Average loss: 3.2692
Iteration: 1977; Percent complete: 49.4%; Average loss: 3.3174
Iteration: 1978; Percent complete: 49.5%; Average loss: 3.2928
Iteration: 1979; Percent complete: 49.5%; Average loss: 3.0910
Iteration: 1980; Percent complete: 49.5%; Average loss: 3.1591
Iteration: 1981; Percent complete: 49.5%; Average loss: 3.1025
Iteration: 1982; Percent complete: 49.5%; Average loss: 2.9712
Iteration: 1983; Percent complete: 49.6%; Average loss: 3.1833
Iteration: 1984; Percent complete: 49.6%; Average loss: 3.1653
Iteration: 1985; Percent complete: 49.6%; Average loss: 2.8869
Iteration: 1986; Percent complete: 49.6%; Average loss: 3.0812
Iteration: 1987; Percent complete: 49.7%; Average loss: 3.1252
Iteration: 1988; Percent complete: 49.7%; Average loss: 3.2517
Iteration: 1989; Percent complete: 49.7%; Average loss: 3.0692
Iteration: 1990; Percent complete: 49.8%; Average loss: 2.9139
Iteration: 1991; Percent complete: 49.8%; Average loss: 3.1146
Iteration: 1992; Percent complete: 49.8%; Average loss: 3.2778
Iteration: 1993; Percent complete: 49.8%; Average loss: 3.1144
Iteration: 1994; Percent complete: 49.9%; Average loss: 3.2938
Iteration: 1995; Percent complete: 49.9%; Average loss: 3.2424
Iteration: 1996; Percent complete: 49.9%; Average loss: 2.9868
Iteration: 1997; Percent complete: 49.9%; Average loss: 3.0940
Iteration: 1998; Percent complete: 50.0%; Average loss: 3.3441
Iteration: 1999; Percent complete: 50.0%; Average loss: 2.9957
Iteration: 2000; Percent complete: 50.0%; Average loss: 3.0504
Iteration: 2001; Percent complete: 50.0%; Average loss: 3.1320
Iteration: 2002; Percent complete: 50.0%; Average loss: 3.1662
Iteration: 2003; Percent complete: 50.1%; Average loss: 3.1104
Iteration: 2004; Percent complete: 50.1%; Average loss: 3.2920
Iteration: 2005; Percent complete: 50.1%; Average loss: 3.5191
Iteration: 2006; Percent complete: 50.1%; Average loss: 3.1749
Iteration: 2007; Percent complete: 50.2%; Average loss: 3.2354
Iteration: 2008; Percent complete: 50.2%; Average loss: 3.0086
Iteration: 2009; Percent complete: 50.2%; Average loss: 3.1677
Iteration: 2010; Percent complete: 50.2%; Average loss: 3.1241
Iteration: 2011; Percent complete: 50.3%; Average loss: 3.1228
Iteration: 2012; Percent complete: 50.3%; Average loss: 3.1716
Iteration: 2013; Percent complete: 50.3%; Average loss: 3.2627
Iteration: 2014; Percent complete: 50.3%; Average loss: 3.1004
Iteration: 2015; Percent complete: 50.4%; Average loss: 3.3309
Iteration: 2016; Percent complete: 50.4%; Average loss: 3.0131
Iteration: 2017; Percent complete: 50.4%; Average loss: 3.0365
Iteration: 2018; Percent complete: 50.4%; Average loss: 3.3958
Iteration: 2019; Percent complete: 50.5%; Average loss: 3.1087
Iteration: 2020; Percent complete: 50.5%; Average loss: 3.3087
Iteration: 2021; Percent complete: 50.5%; Average loss: 3.3139
Iteration: 2022; Percent complete: 50.5%; Average loss: 3.1053
Iteration: 2023; Percent complete: 50.6%; Average loss: 2.8854
Iteration: 2024; Percent complete: 50.6%; Average loss: 3.1088
Iteration: 2025; Percent complete: 50.6%; Average loss: 3.0461
Iteration: 2026; Percent complete: 50.6%; Average loss: 3.2875
Iteration: 2027; Percent complete: 50.7%; Average loss: 3.2600
Iteration: 2028; Percent complete: 50.7%; Average loss: 3.1146
Iteration: 2029; Percent complete: 50.7%; Average loss: 2.9883
Iteration: 2030; Percent complete: 50.7%; Average loss: 3.2389
Iteration: 2031; Percent complete: 50.8%; Average loss: 3.1522
Iteration: 2032; Percent complete: 50.8%; Average loss: 3.2598
Iteration: 2033; Percent complete: 50.8%; Average loss: 2.9710
Iteration: 2034; Percent complete: 50.8%; Average loss: 3.2203
Iteration: 2035; Percent complete: 50.9%; Average loss: 3.1669
Iteration: 2036; Percent complete: 50.9%; Average loss: 3.1305
Iteration: 2037; Percent complete: 50.9%; Average loss: 3.1362
Iteration: 2038; Percent complete: 50.9%; Average loss: 3.2378
Iteration: 2039; Percent complete: 51.0%; Average loss: 3.2133
Iteration: 2040; Percent complete: 51.0%; Average loss: 3.2929
Iteration: 2041; Percent complete: 51.0%; Average loss: 3.2412
Iteration: 2042; Percent complete: 51.0%; Average loss: 2.9949
Iteration: 2043; Percent complete: 51.1%; Average loss: 3.1243
Iteration: 2044; Percent complete: 51.1%; Average loss: 3.0678
Iteration: 2045; Percent complete: 51.1%; Average loss: 3.3484
Iteration: 2046; Percent complete: 51.1%; Average loss: 3.1748
Iteration: 2047; Percent complete: 51.2%; Average loss: 3.1523
Iteration: 2048; Percent complete: 51.2%; Average loss: 3.2559
Iteration: 2049; Percent complete: 51.2%; Average loss: 3.2630
Iteration: 2050; Percent complete: 51.2%; Average loss: 3.1364
Iteration: 2051; Percent complete: 51.3%; Average loss: 3.1268
Iteration: 2052; Percent complete: 51.3%; Average loss: 3.0040
Iteration: 2053; Percent complete: 51.3%; Average loss: 3.0697
Iteration: 2054; Percent complete: 51.3%; Average loss: 3.1203
Iteration: 2055; Percent complete: 51.4%; Average loss: 3.4742
Iteration: 2056; Percent complete: 51.4%; Average loss: 3.4321
Iteration: 2057; Percent complete: 51.4%; Average loss: 3.0961
Iteration: 2058; Percent complete: 51.4%; Average loss: 3.1852
Iteration: 2059; Percent complete: 51.5%; Average loss: 3.3430
Iteration: 2060; Percent complete: 51.5%; Average loss: 3.3227
Iteration: 2061; Percent complete: 51.5%; Average loss: 3.0764
Iteration: 2062; Percent complete: 51.5%; Average loss: 3.1872
Iteration: 2063; Percent complete: 51.6%; Average loss: 3.1879
Iteration: 2064; Percent complete: 51.6%; Average loss: 2.8803
Iteration: 2065; Percent complete: 51.6%; Average loss: 3.1929
Iteration: 2066; Percent complete: 51.6%; Average loss: 3.1673
Iteration: 2067; Percent complete: 51.7%; Average loss: 3.1807
Iteration: 2068; Percent complete: 51.7%; Average loss: 2.9176
Iteration: 2069; Percent complete: 51.7%; Average loss: 3.1685
Iteration: 2070; Percent complete: 51.7%; Average loss: 3.1259
Iteration: 2071; Percent complete: 51.8%; Average loss: 3.2658
Iteration: 2072; Percent complete: 51.8%; Average loss: 3.1512
Iteration: 2073; Percent complete: 51.8%; Average loss: 3.1154
Iteration: 2074; Percent complete: 51.8%; Average loss: 3.1986
Iteration: 2075; Percent complete: 51.9%; Average loss: 3.0655
Iteration: 2076; Percent complete: 51.9%; Average loss: 3.2963
Iteration: 2077; Percent complete: 51.9%; Average loss: 3.0269
Iteration: 2078; Percent complete: 51.9%; Average loss: 3.1463
Iteration: 2079; Percent complete: 52.0%; Average loss: 3.0718
Iteration: 2080; Percent complete: 52.0%; Average loss: 3.1339
Iteration: 2081; Percent complete: 52.0%; Average loss: 3.2845
Iteration: 2082; Percent complete: 52.0%; Average loss: 2.9489
Iteration: 2083; Percent complete: 52.1%; Average loss: 3.2966
Iteration: 2084; Percent complete: 52.1%; Average loss: 3.1883
Iteration: 2085; Percent complete: 52.1%; Average loss: 2.9488
Iteration: 2086; Percent complete: 52.1%; Average loss: 3.0596
Iteration: 2087; Percent complete: 52.2%; Average loss: 3.0667
Iteration: 2088; Percent complete: 52.2%; Average loss: 3.0349
Iteration: 2089; Percent complete: 52.2%; Average loss: 3.0925
Iteration: 2090; Percent complete: 52.2%; Average loss: 3.2403
Iteration: 2091; Percent complete: 52.3%; Average loss: 3.0196
Iteration: 2092; Percent complete: 52.3%; Average loss: 3.3204
Iteration: 2093; Percent complete: 52.3%; Average loss: 2.9521
Iteration: 2094; Percent complete: 52.3%; Average loss: 2.9281
Iteration: 2095; Percent complete: 52.4%; Average loss: 3.0246
Iteration: 2096; Percent complete: 52.4%; Average loss: 3.3271
Iteration: 2097; Percent complete: 52.4%; Average loss: 3.2685
Iteration: 2098; Percent complete: 52.4%; Average loss: 3.0953
Iteration: 2099; Percent complete: 52.5%; Average loss: 3.1096
Iteration: 2100; Percent complete: 52.5%; Average loss: 2.9594
Iteration: 2101; Percent complete: 52.5%; Average loss: 3.0955
Iteration: 2102; Percent complete: 52.5%; Average loss: 2.9341
Iteration: 2103; Percent complete: 52.6%; Average loss: 3.1324
Iteration: 2104; Percent complete: 52.6%; Average loss: 3.0543
Iteration: 2105; Percent complete: 52.6%; Average loss: 3.0985
Iteration: 2106; Percent complete: 52.6%; Average loss: 3.2389
Iteration: 2107; Percent complete: 52.7%; Average loss: 3.3362
Iteration: 2108; Percent complete: 52.7%; Average loss: 3.1386
Iteration: 2109; Percent complete: 52.7%; Average loss: 3.1555
Iteration: 2110; Percent complete: 52.8%; Average loss: 3.0217
Iteration: 2111; Percent complete: 52.8%; Average loss: 2.9593
Iteration: 2112; Percent complete: 52.8%; Average loss: 3.1917
Iteration: 2113; Percent complete: 52.8%; Average loss: 3.3261
Iteration: 2114; Percent complete: 52.8%; Average loss: 3.0624
Iteration: 2115; Percent complete: 52.9%; Average loss: 3.0121
Iteration: 2116; Percent complete: 52.9%; Average loss: 3.1403
Iteration: 2117; Percent complete: 52.9%; Average loss: 3.0440
Iteration: 2118; Percent complete: 52.9%; Average loss: 3.0288
Iteration: 2119; Percent complete: 53.0%; Average loss: 3.1488
Iteration: 2120; Percent complete: 53.0%; Average loss: 3.2173
Iteration: 2121; Percent complete: 53.0%; Average loss: 3.2474
Iteration: 2122; Percent complete: 53.0%; Average loss: 3.1237
Iteration: 2123; Percent complete: 53.1%; Average loss: 3.1806
Iteration: 2124; Percent complete: 53.1%; Average loss: 3.1376
Iteration: 2125; Percent complete: 53.1%; Average loss: 3.3203
Iteration: 2126; Percent complete: 53.1%; Average loss: 2.7241
Iteration: 2127; Percent complete: 53.2%; Average loss: 3.0562
Iteration: 2128; Percent complete: 53.2%; Average loss: 3.0911
Iteration: 2129; Percent complete: 53.2%; Average loss: 3.4333
Iteration: 2130; Percent complete: 53.2%; Average loss: 3.0705
Iteration: 2131; Percent complete: 53.3%; Average loss: 3.1384
Iteration: 2132; Percent complete: 53.3%; Average loss: 3.1927
Iteration: 2133; Percent complete: 53.3%; Average loss: 3.0525
Iteration: 2134; Percent complete: 53.3%; Average loss: 3.4331
Iteration: 2135; Percent complete: 53.4%; Average loss: 2.9741
Iteration: 2136; Percent complete: 53.4%; Average loss: 3.1329
Iteration: 2137; Percent complete: 53.4%; Average loss: 3.2359
Iteration: 2138; Percent complete: 53.4%; Average loss: 3.3332
Iteration: 2139; Percent complete: 53.5%; Average loss: 3.1116
Iteration: 2140; Percent complete: 53.5%; Average loss: 2.9600
Iteration: 2141; Percent complete: 53.5%; Average loss: 3.3149
Iteration: 2142; Percent complete: 53.5%; Average loss: 3.2242
Iteration: 2143; Percent complete: 53.6%; Average loss: 3.0135
Iteration: 2144; Percent complete: 53.6%; Average loss: 3.1628
Iteration: 2145; Percent complete: 53.6%; Average loss: 2.9102
Iteration: 2146; Percent complete: 53.6%; Average loss: 3.0540
Iteration: 2147; Percent complete: 53.7%; Average loss: 3.0699
Iteration: 2148; Percent complete: 53.7%; Average loss: 3.1333
Iteration: 2149; Percent complete: 53.7%; Average loss: 3.0521
Iteration: 2150; Percent complete: 53.8%; Average loss: 3.2138
Iteration: 2151; Percent complete: 53.8%; Average loss: 3.1395
Iteration: 2152; Percent complete: 53.8%; Average loss: 2.8101
Iteration: 2153; Percent complete: 53.8%; Average loss: 2.9327
Iteration: 2154; Percent complete: 53.8%; Average loss: 3.1049
Iteration: 2155; Percent complete: 53.9%; Average loss: 2.9089
Iteration: 2156; Percent complete: 53.9%; Average loss: 3.2779
Iteration: 2157; Percent complete: 53.9%; Average loss: 2.9265
Iteration: 2158; Percent complete: 53.9%; Average loss: 3.1791
Iteration: 2159; Percent complete: 54.0%; Average loss: 3.2865
Iteration: 2160; Percent complete: 54.0%; Average loss: 3.1612
Iteration: 2161; Percent complete: 54.0%; Average loss: 3.1481
Iteration: 2162; Percent complete: 54.0%; Average loss: 3.0755
Iteration: 2163; Percent complete: 54.1%; Average loss: 2.7555
Iteration: 2164; Percent complete: 54.1%; Average loss: 3.1801
Iteration: 2165; Percent complete: 54.1%; Average loss: 2.9159
Iteration: 2166; Percent complete: 54.1%; Average loss: 3.2877
Iteration: 2167; Percent complete: 54.2%; Average loss: 3.1664
Iteration: 2168; Percent complete: 54.2%; Average loss: 3.1552
Iteration: 2169; Percent complete: 54.2%; Average loss: 2.9868
Iteration: 2170; Percent complete: 54.2%; Average loss: 3.1094
Iteration: 2171; Percent complete: 54.3%; Average loss: 3.0149
Iteration: 2172; Percent complete: 54.3%; Average loss: 3.1945
Iteration: 2173; Percent complete: 54.3%; Average loss: 3.1611
Iteration: 2174; Percent complete: 54.4%; Average loss: 3.0193
Iteration: 2175; Percent complete: 54.4%; Average loss: 3.0903
Iteration: 2176; Percent complete: 54.4%; Average loss: 3.1454
Iteration: 2177; Percent complete: 54.4%; Average loss: 3.0380
Iteration: 2178; Percent complete: 54.4%; Average loss: 2.9372
Iteration: 2179; Percent complete: 54.5%; Average loss: 3.0036
Iteration: 2180; Percent complete: 54.5%; Average loss: 2.9132
Iteration: 2181; Percent complete: 54.5%; Average loss: 3.1394
Iteration: 2182; Percent complete: 54.5%; Average loss: 3.2013
Iteration: 2183; Percent complete: 54.6%; Average loss: 3.1755
Iteration: 2184; Percent complete: 54.6%; Average loss: 3.0452
Iteration: 2185; Percent complete: 54.6%; Average loss: 3.0576
Iteration: 2186; Percent complete: 54.6%; Average loss: 3.0051
Iteration: 2187; Percent complete: 54.7%; Average loss: 3.0873
Iteration: 2188; Percent complete: 54.7%; Average loss: 3.0536
Iteration: 2189; Percent complete: 54.7%; Average loss: 3.1691
Iteration: 2190; Percent complete: 54.8%; Average loss: 3.1634
Iteration: 2191; Percent complete: 54.8%; Average loss: 3.1485
Iteration: 2192; Percent complete: 54.8%; Average loss: 3.3363
Iteration: 2193; Percent complete: 54.8%; Average loss: 2.8580
Iteration: 2194; Percent complete: 54.9%; Average loss: 3.0037
Iteration: 2195; Percent complete: 54.9%; Average loss: 3.2484
Iteration: 2196; Percent complete: 54.9%; Average loss: 3.2082
Iteration: 2197; Percent complete: 54.9%; Average loss: 3.0760
Iteration: 2198; Percent complete: 54.9%; Average loss: 3.1684
Iteration: 2199; Percent complete: 55.0%; Average loss: 3.1044
Iteration: 2200; Percent complete: 55.0%; Average loss: 2.9254
Iteration: 2201; Percent complete: 55.0%; Average loss: 3.4081
Iteration: 2202; Percent complete: 55.0%; Average loss: 2.9358
Iteration: 2203; Percent complete: 55.1%; Average loss: 3.0331
Iteration: 2204; Percent complete: 55.1%; Average loss: 3.0961
Iteration: 2205; Percent complete: 55.1%; Average loss: 3.0824
Iteration: 2206; Percent complete: 55.1%; Average loss: 3.1212
Iteration: 2207; Percent complete: 55.2%; Average loss: 2.8783
Iteration: 2208; Percent complete: 55.2%; Average loss: 3.3609
Iteration: 2209; Percent complete: 55.2%; Average loss: 3.3337
Iteration: 2210; Percent complete: 55.2%; Average loss: 3.1627
Iteration: 2211; Percent complete: 55.3%; Average loss: 3.0919
Iteration: 2212; Percent complete: 55.3%; Average loss: 2.9527
Iteration: 2213; Percent complete: 55.3%; Average loss: 3.0401
Iteration: 2214; Percent complete: 55.4%; Average loss: 2.9906
Iteration: 2215; Percent complete: 55.4%; Average loss: 2.9165
Iteration: 2216; Percent complete: 55.4%; Average loss: 2.9251
Iteration: 2217; Percent complete: 55.4%; Average loss: 3.4215
Iteration: 2218; Percent complete: 55.5%; Average loss: 3.2494
Iteration: 2219; Percent complete: 55.5%; Average loss: 2.9564
Iteration: 2220; Percent complete: 55.5%; Average loss: 3.3381
Iteration: 2221; Percent complete: 55.5%; Average loss: 3.3830
Iteration: 2222; Percent complete: 55.5%; Average loss: 3.0434
Iteration: 2223; Percent complete: 55.6%; Average loss: 3.3791
Iteration: 2224; Percent complete: 55.6%; Average loss: 3.1249
Iteration: 2225; Percent complete: 55.6%; Average loss: 3.0225
Iteration: 2226; Percent complete: 55.6%; Average loss: 3.0749
Iteration: 2227; Percent complete: 55.7%; Average loss: 3.1263
Iteration: 2228; Percent complete: 55.7%; Average loss: 3.1473
Iteration: 2229; Percent complete: 55.7%; Average loss: 3.1035
Iteration: 2230; Percent complete: 55.8%; Average loss: 3.0234
Iteration: 2231; Percent complete: 55.8%; Average loss: 2.9032
Iteration: 2232; Percent complete: 55.8%; Average loss: 3.1430
Iteration: 2233; Percent complete: 55.8%; Average loss: 2.9767
Iteration: 2234; Percent complete: 55.9%; Average loss: 3.2819
Iteration: 2235; Percent complete: 55.9%; Average loss: 3.2025
Iteration: 2236; Percent complete: 55.9%; Average loss: 3.0257
Iteration: 2237; Percent complete: 55.9%; Average loss: 3.3319
Iteration: 2238; Percent complete: 56.0%; Average loss: 3.0956
Iteration: 2239; Percent complete: 56.0%; Average loss: 3.0521
Iteration: 2240; Percent complete: 56.0%; Average loss: 3.1457
Iteration: 2241; Percent complete: 56.0%; Average loss: 3.2136
Iteration: 2242; Percent complete: 56.0%; Average loss: 3.5206
Iteration: 2243; Percent complete: 56.1%; Average loss: 2.9881
Iteration: 2244; Percent complete: 56.1%; Average loss: 3.1805
Iteration: 2245; Percent complete: 56.1%; Average loss: 3.1600
Iteration: 2246; Percent complete: 56.1%; Average loss: 3.0665
Iteration: 2247; Percent complete: 56.2%; Average loss: 3.4300
Iteration: 2248; Percent complete: 56.2%; Average loss: 3.0874
Iteration: 2249; Percent complete: 56.2%; Average loss: 3.1392
Iteration: 2250; Percent complete: 56.2%; Average loss: 2.8792
Iteration: 2251; Percent complete: 56.3%; Average loss: 3.0845
Iteration: 2252; Percent complete: 56.3%; Average loss: 3.1819
Iteration: 2253; Percent complete: 56.3%; Average loss: 3.2373
Iteration: 2254; Percent complete: 56.4%; Average loss: 2.9851
Iteration: 2255; Percent complete: 56.4%; Average loss: 3.1493
Iteration: 2256; Percent complete: 56.4%; Average loss: 2.8650
Iteration: 2257; Percent complete: 56.4%; Average loss: 2.8843
Iteration: 2258; Percent complete: 56.5%; Average loss: 3.0372
Iteration: 2259; Percent complete: 56.5%; Average loss: 2.9111
Iteration: 2260; Percent complete: 56.5%; Average loss: 2.8038
Iteration: 2261; Percent complete: 56.5%; Average loss: 3.1324
Iteration: 2262; Percent complete: 56.5%; Average loss: 3.1081
Iteration: 2263; Percent complete: 56.6%; Average loss: 3.2593
Iteration: 2264; Percent complete: 56.6%; Average loss: 3.0888
Iteration: 2265; Percent complete: 56.6%; Average loss: 3.0040
Iteration: 2266; Percent complete: 56.6%; Average loss: 3.0687
Iteration: 2267; Percent complete: 56.7%; Average loss: 2.9910
Iteration: 2268; Percent complete: 56.7%; Average loss: 2.9033
Iteration: 2269; Percent complete: 56.7%; Average loss: 2.9419
Iteration: 2270; Percent complete: 56.8%; Average loss: 2.9293
Iteration: 2271; Percent complete: 56.8%; Average loss: 3.0575
Iteration: 2272; Percent complete: 56.8%; Average loss: 2.9605
Iteration: 2273; Percent complete: 56.8%; Average loss: 3.0835
Iteration: 2274; Percent complete: 56.9%; Average loss: 3.3089
Iteration: 2275; Percent complete: 56.9%; Average loss: 3.1240
Iteration: 2276; Percent complete: 56.9%; Average loss: 2.9797
Iteration: 2277; Percent complete: 56.9%; Average loss: 2.9445
Iteration: 2278; Percent complete: 57.0%; Average loss: 3.1564
Iteration: 2279; Percent complete: 57.0%; Average loss: 2.9883
Iteration: 2280; Percent complete: 57.0%; Average loss: 3.0362
Iteration: 2281; Percent complete: 57.0%; Average loss: 2.9847
Iteration: 2282; Percent complete: 57.0%; Average loss: 3.1739
Iteration: 2283; Percent complete: 57.1%; Average loss: 2.9080
Iteration: 2284; Percent complete: 57.1%; Average loss: 3.1054
Iteration: 2285; Percent complete: 57.1%; Average loss: 3.1635
Iteration: 2286; Percent complete: 57.1%; Average loss: 3.2343
Iteration: 2287; Percent complete: 57.2%; Average loss: 3.0026
Iteration: 2288; Percent complete: 57.2%; Average loss: 3.0706
Iteration: 2289; Percent complete: 57.2%; Average loss: 3.3943
Iteration: 2290; Percent complete: 57.2%; Average loss: 3.2364
Iteration: 2291; Percent complete: 57.3%; Average loss: 3.0755
Iteration: 2292; Percent complete: 57.3%; Average loss: 3.4131
Iteration: 2293; Percent complete: 57.3%; Average loss: 3.0069
Iteration: 2294; Percent complete: 57.4%; Average loss: 3.0552
Iteration: 2295; Percent complete: 57.4%; Average loss: 2.9652
Iteration: 2296; Percent complete: 57.4%; Average loss: 2.9848
Iteration: 2297; Percent complete: 57.4%; Average loss: 3.0958
Iteration: 2298; Percent complete: 57.5%; Average loss: 3.0245
Iteration: 2299; Percent complete: 57.5%; Average loss: 3.1851
Iteration: 2300; Percent complete: 57.5%; Average loss: 2.9744
Iteration: 2301; Percent complete: 57.5%; Average loss: 2.9551
Iteration: 2302; Percent complete: 57.6%; Average loss: 3.2028
Iteration: 2303; Percent complete: 57.6%; Average loss: 3.0208
Iteration: 2304; Percent complete: 57.6%; Average loss: 3.0867
Iteration: 2305; Percent complete: 57.6%; Average loss: 3.1237
Iteration: 2306; Percent complete: 57.6%; Average loss: 2.8928
Iteration: 2307; Percent complete: 57.7%; Average loss: 2.8400
Iteration: 2308; Percent complete: 57.7%; Average loss: 3.0454
Iteration: 2309; Percent complete: 57.7%; Average loss: 3.2635
Iteration: 2310; Percent complete: 57.8%; Average loss: 3.1794
Iteration: 2311; Percent complete: 57.8%; Average loss: 2.9574
Iteration: 2312; Percent complete: 57.8%; Average loss: 3.2532
Iteration: 2313; Percent complete: 57.8%; Average loss: 3.0281
Iteration: 2314; Percent complete: 57.9%; Average loss: 3.3124
Iteration: 2315; Percent complete: 57.9%; Average loss: 3.0809
Iteration: 2316; Percent complete: 57.9%; Average loss: 3.0498
Iteration: 2317; Percent complete: 57.9%; Average loss: 3.1437
Iteration: 2318; Percent complete: 58.0%; Average loss: 3.1118
Iteration: 2319; Percent complete: 58.0%; Average loss: 3.0301
Iteration: 2320; Percent complete: 58.0%; Average loss: 3.1216
Iteration: 2321; Percent complete: 58.0%; Average loss: 3.1414
Iteration: 2322; Percent complete: 58.1%; Average loss: 3.0929
Iteration: 2323; Percent complete: 58.1%; Average loss: 2.7812
Iteration: 2324; Percent complete: 58.1%; Average loss: 2.8273
Iteration: 2325; Percent complete: 58.1%; Average loss: 2.9820
Iteration: 2326; Percent complete: 58.1%; Average loss: 3.2439
Iteration: 2327; Percent complete: 58.2%; Average loss: 2.9663
Iteration: 2328; Percent complete: 58.2%; Average loss: 2.8830
Iteration: 2329; Percent complete: 58.2%; Average loss: 2.9541
Iteration: 2330; Percent complete: 58.2%; Average loss: 3.1904
Iteration: 2331; Percent complete: 58.3%; Average loss: 3.1948
Iteration: 2332; Percent complete: 58.3%; Average loss: 2.6604
Iteration: 2333; Percent complete: 58.3%; Average loss: 3.2035
Iteration: 2334; Percent complete: 58.4%; Average loss: 3.0030
Iteration: 2335; Percent complete: 58.4%; Average loss: 2.9540
Iteration: 2336; Percent complete: 58.4%; Average loss: 3.0802
Iteration: 2337; Percent complete: 58.4%; Average loss: 2.9391
Iteration: 2338; Percent complete: 58.5%; Average loss: 3.1709
Iteration: 2339; Percent complete: 58.5%; Average loss: 3.0142
Iteration: 2340; Percent complete: 58.5%; Average loss: 3.1684
Iteration: 2341; Percent complete: 58.5%; Average loss: 3.0221
Iteration: 2342; Percent complete: 58.6%; Average loss: 3.2309
Iteration: 2343; Percent complete: 58.6%; Average loss: 2.9401
Iteration: 2344; Percent complete: 58.6%; Average loss: 2.9649
Iteration: 2345; Percent complete: 58.6%; Average loss: 3.0550
Iteration: 2346; Percent complete: 58.7%; Average loss: 3.1321
Iteration: 2347; Percent complete: 58.7%; Average loss: 2.9284
Iteration: 2348; Percent complete: 58.7%; Average loss: 3.2303
Iteration: 2349; Percent complete: 58.7%; Average loss: 2.9213
Iteration: 2350; Percent complete: 58.8%; Average loss: 3.0207
Iteration: 2351; Percent complete: 58.8%; Average loss: 3.2033
Iteration: 2352; Percent complete: 58.8%; Average loss: 2.9246
Iteration: 2353; Percent complete: 58.8%; Average loss: 3.3090
Iteration: 2354; Percent complete: 58.9%; Average loss: 3.0719
Iteration: 2355; Percent complete: 58.9%; Average loss: 3.2136
Iteration: 2356; Percent complete: 58.9%; Average loss: 3.1050
Iteration: 2357; Percent complete: 58.9%; Average loss: 3.0552
Iteration: 2358; Percent complete: 59.0%; Average loss: 3.0949
Iteration: 2359; Percent complete: 59.0%; Average loss: 2.9677
Iteration: 2360; Percent complete: 59.0%; Average loss: 2.9653
Iteration: 2361; Percent complete: 59.0%; Average loss: 3.0029
Iteration: 2362; Percent complete: 59.1%; Average loss: 3.2220
Iteration: 2363; Percent complete: 59.1%; Average loss: 3.2261
Iteration: 2364; Percent complete: 59.1%; Average loss: 3.1261
Iteration: 2365; Percent complete: 59.1%; Average loss: 2.9226
Iteration: 2366; Percent complete: 59.2%; Average loss: 3.0365
Iteration: 2367; Percent complete: 59.2%; Average loss: 2.8669
Iteration: 2368; Percent complete: 59.2%; Average loss: 2.8897
Iteration: 2369; Percent complete: 59.2%; Average loss: 2.9228
Iteration: 2370; Percent complete: 59.2%; Average loss: 2.8279
Iteration: 2371; Percent complete: 59.3%; Average loss: 3.0557
Iteration: 2372; Percent complete: 59.3%; Average loss: 2.9198
Iteration: 2373; Percent complete: 59.3%; Average loss: 2.7851
Iteration: 2374; Percent complete: 59.4%; Average loss: 3.1414
Iteration: 2375; Percent complete: 59.4%; Average loss: 3.0772
Iteration: 2376; Percent complete: 59.4%; Average loss: 3.0541
Iteration: 2377; Percent complete: 59.4%; Average loss: 3.2216
Iteration: 2378; Percent complete: 59.5%; Average loss: 3.2073
Iteration: 2379; Percent complete: 59.5%; Average loss: 3.3464
Iteration: 2380; Percent complete: 59.5%; Average loss: 2.9290
Iteration: 2381; Percent complete: 59.5%; Average loss: 2.9132
Iteration: 2382; Percent complete: 59.6%; Average loss: 3.0867
Iteration: 2383; Percent complete: 59.6%; Average loss: 3.1907
Iteration: 2384; Percent complete: 59.6%; Average loss: 2.7775
Iteration: 2385; Percent complete: 59.6%; Average loss: 3.3288
Iteration: 2386; Percent complete: 59.7%; Average loss: 3.3888
Iteration: 2387; Percent complete: 59.7%; Average loss: 2.9071
Iteration: 2388; Percent complete: 59.7%; Average loss: 3.0811
Iteration: 2389; Percent complete: 59.7%; Average loss: 2.7362
Iteration: 2390; Percent complete: 59.8%; Average loss: 3.0028
Iteration: 2391; Percent complete: 59.8%; Average loss: 3.0580
Iteration: 2392; Percent complete: 59.8%; Average loss: 3.1371
Iteration: 2393; Percent complete: 59.8%; Average loss: 3.1798
Iteration: 2394; Percent complete: 59.9%; Average loss: 3.0456
Iteration: 2395; Percent complete: 59.9%; Average loss: 3.0424
Iteration: 2396; Percent complete: 59.9%; Average loss: 3.0652
Iteration: 2397; Percent complete: 59.9%; Average loss: 3.1948
Iteration: 2398; Percent complete: 60.0%; Average loss: 3.1333
Iteration: 2399; Percent complete: 60.0%; Average loss: 3.1043
Iteration: 2400; Percent complete: 60.0%; Average loss: 3.2963
Iteration: 2401; Percent complete: 60.0%; Average loss: 2.9617
Iteration: 2402; Percent complete: 60.1%; Average loss: 3.0547
Iteration: 2403; Percent complete: 60.1%; Average loss: 3.1204
Iteration: 2404; Percent complete: 60.1%; Average loss: 3.3164
Iteration: 2405; Percent complete: 60.1%; Average loss: 2.9724
Iteration: 2406; Percent complete: 60.2%; Average loss: 3.0398
Iteration: 2407; Percent complete: 60.2%; Average loss: 3.2416
Iteration: 2408; Percent complete: 60.2%; Average loss: 2.9715
Iteration: 2409; Percent complete: 60.2%; Average loss: 3.1164
Iteration: 2410; Percent complete: 60.2%; Average loss: 3.0298
Iteration: 2411; Percent complete: 60.3%; Average loss: 2.8837
Iteration: 2412; Percent complete: 60.3%; Average loss: 3.1818
Iteration: 2413; Percent complete: 60.3%; Average loss: 2.9157
Iteration: 2414; Percent complete: 60.4%; Average loss: 3.0364
Iteration: 2415; Percent complete: 60.4%; Average loss: 3.0432
Iteration: 2416; Percent complete: 60.4%; Average loss: 2.8710
Iteration: 2417; Percent complete: 60.4%; Average loss: 3.3480
Iteration: 2418; Percent complete: 60.5%; Average loss: 3.0242
Iteration: 2419; Percent complete: 60.5%; Average loss: 3.0950
Iteration: 2420; Percent complete: 60.5%; Average loss: 3.0567
Iteration: 2421; Percent complete: 60.5%; Average loss: 3.0729
Iteration: 2422; Percent complete: 60.6%; Average loss: 3.1026
Iteration: 2423; Percent complete: 60.6%; Average loss: 3.1975
Iteration: 2424; Percent complete: 60.6%; Average loss: 2.8558
Iteration: 2425; Percent complete: 60.6%; Average loss: 3.0173
Iteration: 2426; Percent complete: 60.7%; Average loss: 2.8145
Iteration: 2427; Percent complete: 60.7%; Average loss: 2.6885
Iteration: 2428; Percent complete: 60.7%; Average loss: 2.8695
Iteration: 2429; Percent complete: 60.7%; Average loss: 2.9315
Iteration: 2430; Percent complete: 60.8%; Average loss: 3.0959
Iteration: 2431; Percent complete: 60.8%; Average loss: 2.8268
Iteration: 2432; Percent complete: 60.8%; Average loss: 2.8347
Iteration: 2433; Percent complete: 60.8%; Average loss: 3.0639
Iteration: 2434; Percent complete: 60.9%; Average loss: 2.8787
Iteration: 2435; Percent complete: 60.9%; Average loss: 3.0897
Iteration: 2436; Percent complete: 60.9%; Average loss: 3.1997
Iteration: 2437; Percent complete: 60.9%; Average loss: 3.1720
Iteration: 2438; Percent complete: 61.0%; Average loss: 3.3600
Iteration: 2439; Percent complete: 61.0%; Average loss: 2.9524
Iteration: 2440; Percent complete: 61.0%; Average loss: 3.1516
Iteration: 2441; Percent complete: 61.0%; Average loss: 3.3644
Iteration: 2442; Percent complete: 61.1%; Average loss: 3.4323
Iteration: 2443; Percent complete: 61.1%; Average loss: 2.9915
Iteration: 2444; Percent complete: 61.1%; Average loss: 2.9708
Iteration: 2445; Percent complete: 61.1%; Average loss: 2.9859
Iteration: 2446; Percent complete: 61.2%; Average loss: 2.9605
Iteration: 2447; Percent complete: 61.2%; Average loss: 3.0085
Iteration: 2448; Percent complete: 61.2%; Average loss: 3.0158
Iteration: 2449; Percent complete: 61.2%; Average loss: 3.1831
Iteration: 2450; Percent complete: 61.3%; Average loss: 2.8312
Iteration: 2451; Percent complete: 61.3%; Average loss: 2.9938
Iteration: 2452; Percent complete: 61.3%; Average loss: 3.2186
Iteration: 2453; Percent complete: 61.3%; Average loss: 2.8679
Iteration: 2454; Percent complete: 61.4%; Average loss: 2.9150
Iteration: 2455; Percent complete: 61.4%; Average loss: 2.9837
Iteration: 2456; Percent complete: 61.4%; Average loss: 3.1222
Iteration: 2457; Percent complete: 61.4%; Average loss: 3.1744
Iteration: 2458; Percent complete: 61.5%; Average loss: 3.0844
Iteration: 2459; Percent complete: 61.5%; Average loss: 2.8316
Iteration: 2460; Percent complete: 61.5%; Average loss: 3.2046
Iteration: 2461; Percent complete: 61.5%; Average loss: 2.8949
Iteration: 2462; Percent complete: 61.6%; Average loss: 3.0048
Iteration: 2463; Percent complete: 61.6%; Average loss: 2.9485
Iteration: 2464; Percent complete: 61.6%; Average loss: 2.8120
Iteration: 2465; Percent complete: 61.6%; Average loss: 2.8635
Iteration: 2466; Percent complete: 61.7%; Average loss: 2.9085
Iteration: 2467; Percent complete: 61.7%; Average loss: 3.1177
Iteration: 2468; Percent complete: 61.7%; Average loss: 3.3046
Iteration: 2469; Percent complete: 61.7%; Average loss: 2.9799
Iteration: 2470; Percent complete: 61.8%; Average loss: 3.1772
Iteration: 2471; Percent complete: 61.8%; Average loss: 3.0362
Iteration: 2472; Percent complete: 61.8%; Average loss: 3.0431
Iteration: 2473; Percent complete: 61.8%; Average loss: 3.1844
Iteration: 2474; Percent complete: 61.9%; Average loss: 2.9282
Iteration: 2475; Percent complete: 61.9%; Average loss: 2.9836
Iteration: 2476; Percent complete: 61.9%; Average loss: 3.1135
Iteration: 2477; Percent complete: 61.9%; Average loss: 3.0279
Iteration: 2478; Percent complete: 62.0%; Average loss: 3.0490
Iteration: 2479; Percent complete: 62.0%; Average loss: 3.3429
Iteration: 2480; Percent complete: 62.0%; Average loss: 2.8553
Iteration: 2481; Percent complete: 62.0%; Average loss: 3.0003
Iteration: 2482; Percent complete: 62.1%; Average loss: 2.9152
Iteration: 2483; Percent complete: 62.1%; Average loss: 3.0351
Iteration: 2484; Percent complete: 62.1%; Average loss: 3.2692
Iteration: 2485; Percent complete: 62.1%; Average loss: 2.9124
Iteration: 2486; Percent complete: 62.2%; Average loss: 2.7522
Iteration: 2487; Percent complete: 62.2%; Average loss: 3.1598
Iteration: 2488; Percent complete: 62.2%; Average loss: 2.9906
Iteration: 2489; Percent complete: 62.2%; Average loss: 3.1007
Iteration: 2490; Percent complete: 62.3%; Average loss: 3.1057
Iteration: 2491; Percent complete: 62.3%; Average loss: 3.1090
Iteration: 2492; Percent complete: 62.3%; Average loss: 3.0952
Iteration: 2493; Percent complete: 62.3%; Average loss: 3.0191
Iteration: 2494; Percent complete: 62.4%; Average loss: 2.9930
Iteration: 2495; Percent complete: 62.4%; Average loss: 3.0393
Iteration: 2496; Percent complete: 62.4%; Average loss: 2.8506
Iteration: 2497; Percent complete: 62.4%; Average loss: 3.1683
Iteration: 2498; Percent complete: 62.5%; Average loss: 2.4656
Iteration: 2499; Percent complete: 62.5%; Average loss: 2.7649
Iteration: 2500; Percent complete: 62.5%; Average loss: 2.9984
Iteration: 2501; Percent complete: 62.5%; Average loss: 2.9490
Iteration: 2502; Percent complete: 62.5%; Average loss: 3.1338
Iteration: 2503; Percent complete: 62.6%; Average loss: 2.9660
Iteration: 2504; Percent complete: 62.6%; Average loss: 2.9648
Iteration: 2505; Percent complete: 62.6%; Average loss: 3.0502
Iteration: 2506; Percent complete: 62.6%; Average loss: 3.1436
Iteration: 2507; Percent complete: 62.7%; Average loss: 3.0336
Iteration: 2508; Percent complete: 62.7%; Average loss: 2.9236
Iteration: 2509; Percent complete: 62.7%; Average loss: 3.1228
Iteration: 2510; Percent complete: 62.7%; Average loss: 3.0429
Iteration: 2511; Percent complete: 62.8%; Average loss: 3.1820
Iteration: 2512; Percent complete: 62.8%; Average loss: 2.7682
Iteration: 2513; Percent complete: 62.8%; Average loss: 3.1359
Iteration: 2514; Percent complete: 62.8%; Average loss: 2.8487
Iteration: 2515; Percent complete: 62.9%; Average loss: 2.9706
Iteration: 2516; Percent complete: 62.9%; Average loss: 3.4241
Iteration: 2517; Percent complete: 62.9%; Average loss: 2.9920
Iteration: 2518; Percent complete: 62.9%; Average loss: 2.8228
Iteration: 2519; Percent complete: 63.0%; Average loss: 3.0278
Iteration: 2520; Percent complete: 63.0%; Average loss: 2.8330
Iteration: 2521; Percent complete: 63.0%; Average loss: 3.1360
Iteration: 2522; Percent complete: 63.0%; Average loss: 2.9688
Iteration: 2523; Percent complete: 63.1%; Average loss: 2.8565
Iteration: 2524; Percent complete: 63.1%; Average loss: 2.8378
Iteration: 2525; Percent complete: 63.1%; Average loss: 2.8853
Iteration: 2526; Percent complete: 63.1%; Average loss: 2.8484
Iteration: 2527; Percent complete: 63.2%; Average loss: 2.9157
Iteration: 2528; Percent complete: 63.2%; Average loss: 2.8188
Iteration: 2529; Percent complete: 63.2%; Average loss: 3.1287
Iteration: 2530; Percent complete: 63.2%; Average loss: 2.9578
Iteration: 2531; Percent complete: 63.3%; Average loss: 2.9819
Iteration: 2532; Percent complete: 63.3%; Average loss: 3.0421
Iteration: 2533; Percent complete: 63.3%; Average loss: 2.9535
Iteration: 2534; Percent complete: 63.3%; Average loss: 3.0108
Iteration: 2535; Percent complete: 63.4%; Average loss: 3.0740
Iteration: 2536; Percent complete: 63.4%; Average loss: 2.9630
Iteration: 2537; Percent complete: 63.4%; Average loss: 2.8090
Iteration: 2538; Percent complete: 63.4%; Average loss: 2.8711
Iteration: 2539; Percent complete: 63.5%; Average loss: 3.0255
Iteration: 2540; Percent complete: 63.5%; Average loss: 3.0759
Iteration: 2541; Percent complete: 63.5%; Average loss: 3.3162
Iteration: 2542; Percent complete: 63.5%; Average loss: 2.7595
Iteration: 2543; Percent complete: 63.6%; Average loss: 2.8221
Iteration: 2544; Percent complete: 63.6%; Average loss: 3.0542
Iteration: 2545; Percent complete: 63.6%; Average loss: 2.8890
Iteration: 2546; Percent complete: 63.6%; Average loss: 3.0975
Iteration: 2547; Percent complete: 63.7%; Average loss: 2.9683
Iteration: 2548; Percent complete: 63.7%; Average loss: 3.1038
Iteration: 2549; Percent complete: 63.7%; Average loss: 2.8711
Iteration: 2550; Percent complete: 63.7%; Average loss: 3.0945
Iteration: 2551; Percent complete: 63.8%; Average loss: 3.2390
Iteration: 2552; Percent complete: 63.8%; Average loss: 2.9315
Iteration: 2553; Percent complete: 63.8%; Average loss: 3.0221
Iteration: 2554; Percent complete: 63.8%; Average loss: 2.9148
Iteration: 2555; Percent complete: 63.9%; Average loss: 2.8723
Iteration: 2556; Percent complete: 63.9%; Average loss: 2.9522
Iteration: 2557; Percent complete: 63.9%; Average loss: 3.0551
Iteration: 2558; Percent complete: 63.9%; Average loss: 3.0630
Iteration: 2559; Percent complete: 64.0%; Average loss: 3.0542
Iteration: 2560; Percent complete: 64.0%; Average loss: 3.3543
Iteration: 2561; Percent complete: 64.0%; Average loss: 2.9076
Iteration: 2562; Percent complete: 64.0%; Average loss: 3.3928
Iteration: 2563; Percent complete: 64.1%; Average loss: 2.8377
Iteration: 2564; Percent complete: 64.1%; Average loss: 3.0109
Iteration: 2565; Percent complete: 64.1%; Average loss: 3.0804
Iteration: 2566; Percent complete: 64.1%; Average loss: 3.1380
Iteration: 2567; Percent complete: 64.2%; Average loss: 3.0457
Iteration: 2568; Percent complete: 64.2%; Average loss: 3.1853
Iteration: 2569; Percent complete: 64.2%; Average loss: 3.2779
Iteration: 2570; Percent complete: 64.2%; Average loss: 3.0252
Iteration: 2571; Percent complete: 64.3%; Average loss: 2.7424
Iteration: 2572; Percent complete: 64.3%; Average loss: 2.9797
Iteration: 2573; Percent complete: 64.3%; Average loss: 2.8841
Iteration: 2574; Percent complete: 64.3%; Average loss: 2.9398
Iteration: 2575; Percent complete: 64.4%; Average loss: 2.8859
Iteration: 2576; Percent complete: 64.4%; Average loss: 3.0507
Iteration: 2577; Percent complete: 64.4%; Average loss: 3.0297
Iteration: 2578; Percent complete: 64.5%; Average loss: 2.7405
Iteration: 2579; Percent complete: 64.5%; Average loss: 2.8616
Iteration: 2580; Percent complete: 64.5%; Average loss: 2.8869
Iteration: 2581; Percent complete: 64.5%; Average loss: 2.8200
Iteration: 2582; Percent complete: 64.5%; Average loss: 3.2714
Iteration: 2583; Percent complete: 64.6%; Average loss: 3.1766
Iteration: 2584; Percent complete: 64.6%; Average loss: 3.0732
Iteration: 2585; Percent complete: 64.6%; Average loss: 3.0381
Iteration: 2586; Percent complete: 64.6%; Average loss: 3.0344
Iteration: 2587; Percent complete: 64.7%; Average loss: 3.1931
Iteration: 2588; Percent complete: 64.7%; Average loss: 3.1634
Iteration: 2589; Percent complete: 64.7%; Average loss: 2.9440
Iteration: 2590; Percent complete: 64.8%; Average loss: 3.1085
Iteration: 2591; Percent complete: 64.8%; Average loss: 2.9324
Iteration: 2592; Percent complete: 64.8%; Average loss: 2.9073
Iteration: 2593; Percent complete: 64.8%; Average loss: 3.0375
Iteration: 2594; Percent complete: 64.8%; Average loss: 2.9777
Iteration: 2595; Percent complete: 64.9%; Average loss: 2.8794
Iteration: 2596; Percent complete: 64.9%; Average loss: 2.8438
Iteration: 2597; Percent complete: 64.9%; Average loss: 2.9707
Iteration: 2598; Percent complete: 65.0%; Average loss: 2.8337
Iteration: 2599; Percent complete: 65.0%; Average loss: 3.1256
Iteration: 2600; Percent complete: 65.0%; Average loss: 2.9385
Iteration: 2601; Percent complete: 65.0%; Average loss: 2.8205
Iteration: 2602; Percent complete: 65.0%; Average loss: 2.9443
Iteration: 2603; Percent complete: 65.1%; Average loss: 3.0718
Iteration: 2604; Percent complete: 65.1%; Average loss: 2.8659
Iteration: 2605; Percent complete: 65.1%; Average loss: 3.0953
Iteration: 2606; Percent complete: 65.1%; Average loss: 2.9391
Iteration: 2607; Percent complete: 65.2%; Average loss: 3.1425
Iteration: 2608; Percent complete: 65.2%; Average loss: 3.1182
Iteration: 2609; Percent complete: 65.2%; Average loss: 2.9246
Iteration: 2610; Percent complete: 65.2%; Average loss: 3.2087
Iteration: 2611; Percent complete: 65.3%; Average loss: 3.0947
Iteration: 2612; Percent complete: 65.3%; Average loss: 3.0252
Iteration: 2613; Percent complete: 65.3%; Average loss: 3.2833
Iteration: 2614; Percent complete: 65.3%; Average loss: 3.0293
Iteration: 2615; Percent complete: 65.4%; Average loss: 2.7605
Iteration: 2616; Percent complete: 65.4%; Average loss: 3.1400
Iteration: 2617; Percent complete: 65.4%; Average loss: 3.1642
Iteration: 2618; Percent complete: 65.5%; Average loss: 3.1053
Iteration: 2619; Percent complete: 65.5%; Average loss: 3.0474
Iteration: 2620; Percent complete: 65.5%; Average loss: 2.8085
Iteration: 2621; Percent complete: 65.5%; Average loss: 3.1285
Iteration: 2622; Percent complete: 65.5%; Average loss: 3.1809
Iteration: 2623; Percent complete: 65.6%; Average loss: 3.3411
Iteration: 2624; Percent complete: 65.6%; Average loss: 2.9607
Iteration: 2625; Percent complete: 65.6%; Average loss: 3.4894
Iteration: 2626; Percent complete: 65.6%; Average loss: 3.2052
Iteration: 2627; Percent complete: 65.7%; Average loss: 2.9260
Iteration: 2628; Percent complete: 65.7%; Average loss: 2.8175
Iteration: 2629; Percent complete: 65.7%; Average loss: 3.0346
Iteration: 2630; Percent complete: 65.8%; Average loss: 3.1389
Iteration: 2631; Percent complete: 65.8%; Average loss: 3.0932
Iteration: 2632; Percent complete: 65.8%; Average loss: 3.0320
Iteration: 2633; Percent complete: 65.8%; Average loss: 3.0601
Iteration: 2634; Percent complete: 65.8%; Average loss: 2.9677
Iteration: 2635; Percent complete: 65.9%; Average loss: 3.0073
Iteration: 2636; Percent complete: 65.9%; Average loss: 2.8931
Iteration: 2637; Percent complete: 65.9%; Average loss: 2.8088
Iteration: 2638; Percent complete: 66.0%; Average loss: 3.2593
Iteration: 2639; Percent complete: 66.0%; Average loss: 2.8527
Iteration: 2640; Percent complete: 66.0%; Average loss: 3.0072
Iteration: 2641; Percent complete: 66.0%; Average loss: 3.0191
Iteration: 2642; Percent complete: 66.0%; Average loss: 2.8725
Iteration: 2643; Percent complete: 66.1%; Average loss: 2.9023
Iteration: 2644; Percent complete: 66.1%; Average loss: 2.8915
Iteration: 2645; Percent complete: 66.1%; Average loss: 2.9539
Iteration: 2646; Percent complete: 66.1%; Average loss: 2.8541
Iteration: 2647; Percent complete: 66.2%; Average loss: 2.7885
Iteration: 2648; Percent complete: 66.2%; Average loss: 3.2003
Iteration: 2649; Percent complete: 66.2%; Average loss: 2.7546
Iteration: 2650; Percent complete: 66.2%; Average loss: 3.1806
Iteration: 2651; Percent complete: 66.3%; Average loss: 2.7872
Iteration: 2652; Percent complete: 66.3%; Average loss: 3.0515
Iteration: 2653; Percent complete: 66.3%; Average loss: 3.1512
Iteration: 2654; Percent complete: 66.3%; Average loss: 2.9163
Iteration: 2655; Percent complete: 66.4%; Average loss: 3.2837
Iteration: 2656; Percent complete: 66.4%; Average loss: 3.1166
Iteration: 2657; Percent complete: 66.4%; Average loss: 2.9040
Iteration: 2658; Percent complete: 66.5%; Average loss: 3.0373
Iteration: 2659; Percent complete: 66.5%; Average loss: 3.0619
Iteration: 2660; Percent complete: 66.5%; Average loss: 2.7946
Iteration: 2661; Percent complete: 66.5%; Average loss: 2.8884
Iteration: 2662; Percent complete: 66.5%; Average loss: 2.9259
Iteration: 2663; Percent complete: 66.6%; Average loss: 2.7312
Iteration: 2664; Percent complete: 66.6%; Average loss: 3.0488
Iteration: 2665; Percent complete: 66.6%; Average loss: 2.9227
Iteration: 2666; Percent complete: 66.6%; Average loss: 2.8298
Iteration: 2667; Percent complete: 66.7%; Average loss: 3.1055
Iteration: 2668; Percent complete: 66.7%; Average loss: 2.9573
Iteration: 2669; Percent complete: 66.7%; Average loss: 2.9228
Iteration: 2670; Percent complete: 66.8%; Average loss: 3.0691
Iteration: 2671; Percent complete: 66.8%; Average loss: 3.0939
Iteration: 2672; Percent complete: 66.8%; Average loss: 3.0935
Iteration: 2673; Percent complete: 66.8%; Average loss: 3.0131
Iteration: 2674; Percent complete: 66.8%; Average loss: 2.5980
Iteration: 2675; Percent complete: 66.9%; Average loss: 3.0360
Iteration: 2676; Percent complete: 66.9%; Average loss: 2.9322
Iteration: 2677; Percent complete: 66.9%; Average loss: 3.1291
Iteration: 2678; Percent complete: 67.0%; Average loss: 2.7842
Iteration: 2679; Percent complete: 67.0%; Average loss: 3.1803
Iteration: 2680; Percent complete: 67.0%; Average loss: 2.8317
Iteration: 2681; Percent complete: 67.0%; Average loss: 2.7831
Iteration: 2682; Percent complete: 67.0%; Average loss: 2.9472
Iteration: 2683; Percent complete: 67.1%; Average loss: 2.9085
Iteration: 2684; Percent complete: 67.1%; Average loss: 2.8021
Iteration: 2685; Percent complete: 67.1%; Average loss: 2.9668
Iteration: 2686; Percent complete: 67.2%; Average loss: 3.1800
Iteration: 2687; Percent complete: 67.2%; Average loss: 2.8717
Iteration: 2688; Percent complete: 67.2%; Average loss: 2.9049
Iteration: 2689; Percent complete: 67.2%; Average loss: 2.9964
Iteration: 2690; Percent complete: 67.2%; Average loss: 3.1431
Iteration: 2691; Percent complete: 67.3%; Average loss: 2.9273
Iteration: 2692; Percent complete: 67.3%; Average loss: 2.9533
Iteration: 2693; Percent complete: 67.3%; Average loss: 3.0808
Iteration: 2694; Percent complete: 67.3%; Average loss: 3.0876
Iteration: 2695; Percent complete: 67.4%; Average loss: 3.1609
Iteration: 2696; Percent complete: 67.4%; Average loss: 2.8757
Iteration: 2697; Percent complete: 67.4%; Average loss: 3.2355
Iteration: 2698; Percent complete: 67.5%; Average loss: 3.2091
Iteration: 2699; Percent complete: 67.5%; Average loss: 2.9052
Iteration: 2700; Percent complete: 67.5%; Average loss: 2.7993
Iteration: 2701; Percent complete: 67.5%; Average loss: 2.9508
Iteration: 2702; Percent complete: 67.5%; Average loss: 2.9339
Iteration: 2703; Percent complete: 67.6%; Average loss: 3.0579
Iteration: 2704; Percent complete: 67.6%; Average loss: 3.1128
Iteration: 2705; Percent complete: 67.6%; Average loss: 3.0010
Iteration: 2706; Percent complete: 67.7%; Average loss: 3.0189
Iteration: 2707; Percent complete: 67.7%; Average loss: 2.9975
Iteration: 2708; Percent complete: 67.7%; Average loss: 2.9269
Iteration: 2709; Percent complete: 67.7%; Average loss: 2.9760
Iteration: 2710; Percent complete: 67.8%; Average loss: 3.0431
Iteration: 2711; Percent complete: 67.8%; Average loss: 3.1741
Iteration: 2712; Percent complete: 67.8%; Average loss: 3.0742
Iteration: 2713; Percent complete: 67.8%; Average loss: 2.9163
Iteration: 2714; Percent complete: 67.8%; Average loss: 2.7940
Iteration: 2715; Percent complete: 67.9%; Average loss: 2.9498
Iteration: 2716; Percent complete: 67.9%; Average loss: 3.1023
Iteration: 2717; Percent complete: 67.9%; Average loss: 3.0977
Iteration: 2718; Percent complete: 68.0%; Average loss: 3.1145
Iteration: 2719; Percent complete: 68.0%; Average loss: 2.9484
Iteration: 2720; Percent complete: 68.0%; Average loss: 2.9727
Iteration: 2721; Percent complete: 68.0%; Average loss: 3.1399
Iteration: 2722; Percent complete: 68.0%; Average loss: 3.0568
Iteration: 2723; Percent complete: 68.1%; Average loss: 2.8531
Iteration: 2724; Percent complete: 68.1%; Average loss: 2.6536
Iteration: 2725; Percent complete: 68.1%; Average loss: 2.9434
Iteration: 2726; Percent complete: 68.2%; Average loss: 2.7856
Iteration: 2727; Percent complete: 68.2%; Average loss: 2.9548
Iteration: 2728; Percent complete: 68.2%; Average loss: 2.7842
Iteration: 2729; Percent complete: 68.2%; Average loss: 2.7598
Iteration: 2730; Percent complete: 68.2%; Average loss: 2.8051
Iteration: 2731; Percent complete: 68.3%; Average loss: 2.8881
Iteration: 2732; Percent complete: 68.3%; Average loss: 2.8414
Iteration: 2733; Percent complete: 68.3%; Average loss: 2.9545
Iteration: 2734; Percent complete: 68.3%; Average loss: 2.9953
Iteration: 2735; Percent complete: 68.4%; Average loss: 2.7107
Iteration: 2736; Percent complete: 68.4%; Average loss: 2.9044
Iteration: 2737; Percent complete: 68.4%; Average loss: 2.9608
Iteration: 2738; Percent complete: 68.5%; Average loss: 2.9856
Iteration: 2739; Percent complete: 68.5%; Average loss: 2.9045
Iteration: 2740; Percent complete: 68.5%; Average loss: 3.0494
Iteration: 2741; Percent complete: 68.5%; Average loss: 3.0147
Iteration: 2742; Percent complete: 68.5%; Average loss: 2.9637
Iteration: 2743; Percent complete: 68.6%; Average loss: 2.8504
Iteration: 2744; Percent complete: 68.6%; Average loss: 2.9723
Iteration: 2745; Percent complete: 68.6%; Average loss: 3.0476
Iteration: 2746; Percent complete: 68.7%; Average loss: 2.9852
Iteration: 2747; Percent complete: 68.7%; Average loss: 2.7783
Iteration: 2748; Percent complete: 68.7%; Average loss: 2.9127
Iteration: 2749; Percent complete: 68.7%; Average loss: 2.8167
Iteration: 2750; Percent complete: 68.8%; Average loss: 2.8957
Iteration: 2751; Percent complete: 68.8%; Average loss: 2.7414
Iteration: 2752; Percent complete: 68.8%; Average loss: 3.1003
Iteration: 2753; Percent complete: 68.8%; Average loss: 2.8766
Iteration: 2754; Percent complete: 68.8%; Average loss: 2.9823
Iteration: 2755; Percent complete: 68.9%; Average loss: 3.1317
Iteration: 2756; Percent complete: 68.9%; Average loss: 2.9667
Iteration: 2757; Percent complete: 68.9%; Average loss: 3.0446
Iteration: 2758; Percent complete: 69.0%; Average loss: 2.9396
Iteration: 2759; Percent complete: 69.0%; Average loss: 2.8200
Iteration: 2760; Percent complete: 69.0%; Average loss: 3.1833
Iteration: 2761; Percent complete: 69.0%; Average loss: 2.8301
Iteration: 2762; Percent complete: 69.0%; Average loss: 2.8434
Iteration: 2763; Percent complete: 69.1%; Average loss: 2.7499
Iteration: 2764; Percent complete: 69.1%; Average loss: 2.7716
Iteration: 2765; Percent complete: 69.1%; Average loss: 3.0169
Iteration: 2766; Percent complete: 69.2%; Average loss: 3.0733
Iteration: 2767; Percent complete: 69.2%; Average loss: 3.0927
Iteration: 2768; Percent complete: 69.2%; Average loss: 2.9122
Iteration: 2769; Percent complete: 69.2%; Average loss: 3.1390
Iteration: 2770; Percent complete: 69.2%; Average loss: 3.0669
Iteration: 2771; Percent complete: 69.3%; Average loss: 2.9417
Iteration: 2772; Percent complete: 69.3%; Average loss: 2.7404
Iteration: 2773; Percent complete: 69.3%; Average loss: 3.1457
Iteration: 2774; Percent complete: 69.3%; Average loss: 2.8604
Iteration: 2775; Percent complete: 69.4%; Average loss: 2.8312
Iteration: 2776; Percent complete: 69.4%; Average loss: 2.7824
Iteration: 2777; Percent complete: 69.4%; Average loss: 3.2094
Iteration: 2778; Percent complete: 69.5%; Average loss: 2.9089
Iteration: 2779; Percent complete: 69.5%; Average loss: 3.0228
Iteration: 2780; Percent complete: 69.5%; Average loss: 3.2853
Iteration: 2781; Percent complete: 69.5%; Average loss: 2.8226
Iteration: 2782; Percent complete: 69.5%; Average loss: 3.0691
Iteration: 2783; Percent complete: 69.6%; Average loss: 3.2163
Iteration: 2784; Percent complete: 69.6%; Average loss: 3.1688
Iteration: 2785; Percent complete: 69.6%; Average loss: 3.0436
Iteration: 2786; Percent complete: 69.7%; Average loss: 2.8971
Iteration: 2787; Percent complete: 69.7%; Average loss: 2.9059
Iteration: 2788; Percent complete: 69.7%; Average loss: 3.1242
Iteration: 2789; Percent complete: 69.7%; Average loss: 2.8009
Iteration: 2790; Percent complete: 69.8%; Average loss: 2.8447
Iteration: 2791; Percent complete: 69.8%; Average loss: 2.7433
Iteration: 2792; Percent complete: 69.8%; Average loss: 2.6556
Iteration: 2793; Percent complete: 69.8%; Average loss: 2.9304
Iteration: 2794; Percent complete: 69.8%; Average loss: 3.2843
Iteration: 2795; Percent complete: 69.9%; Average loss: 2.8997
Iteration: 2796; Percent complete: 69.9%; Average loss: 2.8800
Iteration: 2797; Percent complete: 69.9%; Average loss: 2.7970
Iteration: 2798; Percent complete: 70.0%; Average loss: 3.0625
Iteration: 2799; Percent complete: 70.0%; Average loss: 2.8446
Iteration: 2800; Percent complete: 70.0%; Average loss: 2.9229
Iteration: 2801; Percent complete: 70.0%; Average loss: 2.8371
Iteration: 2802; Percent complete: 70.0%; Average loss: 2.9717
Iteration: 2803; Percent complete: 70.1%; Average loss: 3.0377
Iteration: 2804; Percent complete: 70.1%; Average loss: 3.1995
Iteration: 2805; Percent complete: 70.1%; Average loss: 2.7956
Iteration: 2806; Percent complete: 70.2%; Average loss: 2.9512
Iteration: 2807; Percent complete: 70.2%; Average loss: 2.9685
Iteration: 2808; Percent complete: 70.2%; Average loss: 2.8148
Iteration: 2809; Percent complete: 70.2%; Average loss: 3.1331
Iteration: 2810; Percent complete: 70.2%; Average loss: 3.1371
Iteration: 2811; Percent complete: 70.3%; Average loss: 3.0554
Iteration: 2812; Percent complete: 70.3%; Average loss: 2.7708
Iteration: 2813; Percent complete: 70.3%; Average loss: 2.8129
Iteration: 2814; Percent complete: 70.3%; Average loss: 2.6428
Iteration: 2815; Percent complete: 70.4%; Average loss: 2.8767
Iteration: 2816; Percent complete: 70.4%; Average loss: 3.0525
Iteration: 2817; Percent complete: 70.4%; Average loss: 2.9056
Iteration: 2818; Percent complete: 70.5%; Average loss: 2.8350
Iteration: 2819; Percent complete: 70.5%; Average loss: 2.7512
Iteration: 2820; Percent complete: 70.5%; Average loss: 2.9960
Iteration: 2821; Percent complete: 70.5%; Average loss: 3.0205
Iteration: 2822; Percent complete: 70.5%; Average loss: 2.9552
Iteration: 2823; Percent complete: 70.6%; Average loss: 2.9343
Iteration: 2824; Percent complete: 70.6%; Average loss: 2.8198
Iteration: 2825; Percent complete: 70.6%; Average loss: 2.9701
Iteration: 2826; Percent complete: 70.7%; Average loss: 2.9541
Iteration: 2827; Percent complete: 70.7%; Average loss: 3.0133
Iteration: 2828; Percent complete: 70.7%; Average loss: 2.9075
Iteration: 2829; Percent complete: 70.7%; Average loss: 2.9300
Iteration: 2830; Percent complete: 70.8%; Average loss: 2.9523
Iteration: 2831; Percent complete: 70.8%; Average loss: 2.9285
Iteration: 2832; Percent complete: 70.8%; Average loss: 2.8543
Iteration: 2833; Percent complete: 70.8%; Average loss: 2.6336
Iteration: 2834; Percent complete: 70.9%; Average loss: 2.8145
Iteration: 2835; Percent complete: 70.9%; Average loss: 2.8083
Iteration: 2836; Percent complete: 70.9%; Average loss: 2.9936
Iteration: 2837; Percent complete: 70.9%; Average loss: 2.9059
Iteration: 2838; Percent complete: 71.0%; Average loss: 2.7133
Iteration: 2839; Percent complete: 71.0%; Average loss: 2.7972
Iteration: 2840; Percent complete: 71.0%; Average loss: 3.0767
Iteration: 2841; Percent complete: 71.0%; Average loss: 2.9291
Iteration: 2842; Percent complete: 71.0%; Average loss: 3.1457
Iteration: 2843; Percent complete: 71.1%; Average loss: 2.9718
Iteration: 2844; Percent complete: 71.1%; Average loss: 2.9466
Iteration: 2845; Percent complete: 71.1%; Average loss: 3.0625
Iteration: 2846; Percent complete: 71.2%; Average loss: 2.6934
Iteration: 2847; Percent complete: 71.2%; Average loss: 3.2291
Iteration: 2848; Percent complete: 71.2%; Average loss: 2.7691
Iteration: 2849; Percent complete: 71.2%; Average loss: 2.9211
Iteration: 2850; Percent complete: 71.2%; Average loss: 2.9161
Iteration: 2851; Percent complete: 71.3%; Average loss: 2.8209
Iteration: 2852; Percent complete: 71.3%; Average loss: 2.9422
Iteration: 2853; Percent complete: 71.3%; Average loss: 2.9640
Iteration: 2854; Percent complete: 71.4%; Average loss: 2.9492
Iteration: 2855; Percent complete: 71.4%; Average loss: 2.9913
Iteration: 2856; Percent complete: 71.4%; Average loss: 2.6989
Iteration: 2857; Percent complete: 71.4%; Average loss: 2.7961
Iteration: 2858; Percent complete: 71.5%; Average loss: 2.7481
Iteration: 2859; Percent complete: 71.5%; Average loss: 2.7513
...
Iteration: 2900; Percent complete: 72.5%; Average loss: 2.9353
...
Iteration: 3000; Percent complete: 75.0%; Average loss: 2.8282
...
Iteration: 3100; Percent complete: 77.5%; Average loss: 3.0148
...
Iteration: 3200; Percent complete: 80.0%; Average loss: 2.8583
...
Iteration: 3300; Percent complete: 82.5%; Average loss: 2.9071
...
Iteration: 3400; Percent complete: 85.0%; Average loss: 2.6745
...
Iteration: 3424; Percent complete: 85.6%; Average loss: 2.9089
Iteration: 3425; Percent complete: 85.6%; Average loss: 2.7290
Iteration: 3426; Percent complete: 85.7%; Average loss: 2.7261
Iteration: 3427; Percent complete: 85.7%; Average loss: 2.7248
Iteration: 3428; Percent complete: 85.7%; Average loss: 2.7626
Iteration: 3429; Percent complete: 85.7%; Average loss: 2.6561
Iteration: 3430; Percent complete: 85.8%; Average loss: 2.6188
Iteration: 3431; Percent complete: 85.8%; Average loss: 2.9826
Iteration: 3432; Percent complete: 85.8%; Average loss: 2.8001
Iteration: 3433; Percent complete: 85.8%; Average loss: 2.8673
Iteration: 3434; Percent complete: 85.9%; Average loss: 2.8389
Iteration: 3435; Percent complete: 85.9%; Average loss: 2.6278
Iteration: 3436; Percent complete: 85.9%; Average loss: 2.8829
Iteration: 3437; Percent complete: 85.9%; Average loss: 2.9607
Iteration: 3438; Percent complete: 86.0%; Average loss: 2.7004
Iteration: 3439; Percent complete: 86.0%; Average loss: 2.6992
Iteration: 3440; Percent complete: 86.0%; Average loss: 2.8382
Iteration: 3441; Percent complete: 86.0%; Average loss: 2.7450
Iteration: 3442; Percent complete: 86.1%; Average loss: 2.5566
Iteration: 3443; Percent complete: 86.1%; Average loss: 2.6566
Iteration: 3444; Percent complete: 86.1%; Average loss: 3.0172
Iteration: 3445; Percent complete: 86.1%; Average loss: 2.8428
Iteration: 3446; Percent complete: 86.2%; Average loss: 2.7933
Iteration: 3447; Percent complete: 86.2%; Average loss: 2.9195
Iteration: 3448; Percent complete: 86.2%; Average loss: 2.7146
Iteration: 3449; Percent complete: 86.2%; Average loss: 2.7246
Iteration: 3450; Percent complete: 86.2%; Average loss: 2.8549
Iteration: 3451; Percent complete: 86.3%; Average loss: 3.0109
Iteration: 3452; Percent complete: 86.3%; Average loss: 2.5265
Iteration: 3453; Percent complete: 86.3%; Average loss: 2.6842
Iteration: 3454; Percent complete: 86.4%; Average loss: 2.5825
Iteration: 3455; Percent complete: 86.4%; Average loss: 2.6451
Iteration: 3456; Percent complete: 86.4%; Average loss: 2.5736
Iteration: 3457; Percent complete: 86.4%; Average loss: 2.6173
Iteration: 3458; Percent complete: 86.5%; Average loss: 2.9716
Iteration: 3459; Percent complete: 86.5%; Average loss: 2.5097
Iteration: 3460; Percent complete: 86.5%; Average loss: 2.6165
Iteration: 3461; Percent complete: 86.5%; Average loss: 2.7687
Iteration: 3462; Percent complete: 86.6%; Average loss: 2.8299
Iteration: 3463; Percent complete: 86.6%; Average loss: 2.7898
Iteration: 3464; Percent complete: 86.6%; Average loss: 2.6263
Iteration: 3465; Percent complete: 86.6%; Average loss: 2.7803
Iteration: 3466; Percent complete: 86.7%; Average loss: 2.9045
Iteration: 3467; Percent complete: 86.7%; Average loss: 2.7840
Iteration: 3468; Percent complete: 86.7%; Average loss: 2.6989
Iteration: 3469; Percent complete: 86.7%; Average loss: 2.4825
Iteration: 3470; Percent complete: 86.8%; Average loss: 2.8671
Iteration: 3471; Percent complete: 86.8%; Average loss: 2.8817
Iteration: 3472; Percent complete: 86.8%; Average loss: 2.6183
Iteration: 3473; Percent complete: 86.8%; Average loss: 2.9051
Iteration: 3474; Percent complete: 86.9%; Average loss: 2.6117
Iteration: 3475; Percent complete: 86.9%; Average loss: 2.4791
Iteration: 3476; Percent complete: 86.9%; Average loss: 2.7763
Iteration: 3477; Percent complete: 86.9%; Average loss: 2.7533
Iteration: 3478; Percent complete: 87.0%; Average loss: 2.8679
Iteration: 3479; Percent complete: 87.0%; Average loss: 2.7282
Iteration: 3480; Percent complete: 87.0%; Average loss: 2.7651
Iteration: 3481; Percent complete: 87.0%; Average loss: 2.8369
Iteration: 3482; Percent complete: 87.1%; Average loss: 2.6490
Iteration: 3483; Percent complete: 87.1%; Average loss: 2.6662
Iteration: 3484; Percent complete: 87.1%; Average loss: 2.6325
Iteration: 3485; Percent complete: 87.1%; Average loss: 2.8317
Iteration: 3486; Percent complete: 87.2%; Average loss: 2.8270
Iteration: 3487; Percent complete: 87.2%; Average loss: 2.8949
Iteration: 3488; Percent complete: 87.2%; Average loss: 2.6505
Iteration: 3489; Percent complete: 87.2%; Average loss: 2.9399
Iteration: 3490; Percent complete: 87.2%; Average loss: 2.8460
Iteration: 3491; Percent complete: 87.3%; Average loss: 2.5952
Iteration: 3492; Percent complete: 87.3%; Average loss: 2.6548
Iteration: 3493; Percent complete: 87.3%; Average loss: 2.4498
Iteration: 3494; Percent complete: 87.4%; Average loss: 2.8597
Iteration: 3495; Percent complete: 87.4%; Average loss: 2.8833
Iteration: 3496; Percent complete: 87.4%; Average loss: 2.9354
Iteration: 3497; Percent complete: 87.4%; Average loss: 2.5506
Iteration: 3498; Percent complete: 87.5%; Average loss: 2.9888
Iteration: 3499; Percent complete: 87.5%; Average loss: 2.7491
Iteration: 3500; Percent complete: 87.5%; Average loss: 2.6098
Iteration: 3501; Percent complete: 87.5%; Average loss: 2.7484
Iteration: 3502; Percent complete: 87.5%; Average loss: 2.7583
Iteration: 3503; Percent complete: 87.6%; Average loss: 2.9259
Iteration: 3504; Percent complete: 87.6%; Average loss: 2.7101
Iteration: 3505; Percent complete: 87.6%; Average loss: 3.0304
Iteration: 3506; Percent complete: 87.6%; Average loss: 2.9223
Iteration: 3507; Percent complete: 87.7%; Average loss: 2.8235
Iteration: 3508; Percent complete: 87.7%; Average loss: 2.6244
Iteration: 3509; Percent complete: 87.7%; Average loss: 3.0151
Iteration: 3510; Percent complete: 87.8%; Average loss: 2.7285
Iteration: 3511; Percent complete: 87.8%; Average loss: 2.6320
Iteration: 3512; Percent complete: 87.8%; Average loss: 2.5895
Iteration: 3513; Percent complete: 87.8%; Average loss: 2.7497
Iteration: 3514; Percent complete: 87.8%; Average loss: 2.8779
Iteration: 3515; Percent complete: 87.9%; Average loss: 2.8297
Iteration: 3516; Percent complete: 87.9%; Average loss: 2.6783
Iteration: 3517; Percent complete: 87.9%; Average loss: 2.7282
Iteration: 3518; Percent complete: 87.9%; Average loss: 2.6331
Iteration: 3519; Percent complete: 88.0%; Average loss: 2.7667
Iteration: 3520; Percent complete: 88.0%; Average loss: 2.7042
Iteration: 3521; Percent complete: 88.0%; Average loss: 2.9408
Iteration: 3522; Percent complete: 88.0%; Average loss: 2.6549
Iteration: 3523; Percent complete: 88.1%; Average loss: 2.7884
Iteration: 3524; Percent complete: 88.1%; Average loss: 2.7823
Iteration: 3525; Percent complete: 88.1%; Average loss: 2.6718
Iteration: 3526; Percent complete: 88.1%; Average loss: 3.0330
Iteration: 3527; Percent complete: 88.2%; Average loss: 2.6626
Iteration: 3528; Percent complete: 88.2%; Average loss: 2.8354
Iteration: 3529; Percent complete: 88.2%; Average loss: 2.6018
Iteration: 3530; Percent complete: 88.2%; Average loss: 2.4260
Iteration: 3531; Percent complete: 88.3%; Average loss: 2.7861
Iteration: 3532; Percent complete: 88.3%; Average loss: 2.5388
Iteration: 3533; Percent complete: 88.3%; Average loss: 2.7406
Iteration: 3534; Percent complete: 88.3%; Average loss: 2.7148
Iteration: 3535; Percent complete: 88.4%; Average loss: 2.8803
Iteration: 3536; Percent complete: 88.4%; Average loss: 2.8175
Iteration: 3537; Percent complete: 88.4%; Average loss: 2.7970
Iteration: 3538; Percent complete: 88.4%; Average loss: 2.7688
Iteration: 3539; Percent complete: 88.5%; Average loss: 2.8850
Iteration: 3540; Percent complete: 88.5%; Average loss: 2.8965
Iteration: 3541; Percent complete: 88.5%; Average loss: 2.7286
Iteration: 3542; Percent complete: 88.5%; Average loss: 2.7962
Iteration: 3543; Percent complete: 88.6%; Average loss: 2.7983
Iteration: 3544; Percent complete: 88.6%; Average loss: 2.7767
Iteration: 3545; Percent complete: 88.6%; Average loss: 2.7890
Iteration: 3546; Percent complete: 88.6%; Average loss: 2.8177
Iteration: 3547; Percent complete: 88.7%; Average loss: 2.5810
Iteration: 3548; Percent complete: 88.7%; Average loss: 2.8080
Iteration: 3549; Percent complete: 88.7%; Average loss: 2.8098
Iteration: 3550; Percent complete: 88.8%; Average loss: 2.7285
Iteration: 3551; Percent complete: 88.8%; Average loss: 2.6452
Iteration: 3552; Percent complete: 88.8%; Average loss: 2.6301
Iteration: 3553; Percent complete: 88.8%; Average loss: 2.9311
Iteration: 3554; Percent complete: 88.8%; Average loss: 2.7128
Iteration: 3555; Percent complete: 88.9%; Average loss: 2.9755
Iteration: 3556; Percent complete: 88.9%; Average loss: 2.7340
Iteration: 3557; Percent complete: 88.9%; Average loss: 2.6253
Iteration: 3558; Percent complete: 88.9%; Average loss: 2.5933
Iteration: 3559; Percent complete: 89.0%; Average loss: 2.7707
Iteration: 3560; Percent complete: 89.0%; Average loss: 2.6808
Iteration: 3561; Percent complete: 89.0%; Average loss: 2.6710
Iteration: 3562; Percent complete: 89.0%; Average loss: 2.8342
Iteration: 3563; Percent complete: 89.1%; Average loss: 2.7183
Iteration: 3564; Percent complete: 89.1%; Average loss: 2.6335
Iteration: 3565; Percent complete: 89.1%; Average loss: 3.0404
Iteration: 3566; Percent complete: 89.1%; Average loss: 2.5259
Iteration: 3567; Percent complete: 89.2%; Average loss: 2.5919
Iteration: 3568; Percent complete: 89.2%; Average loss: 2.7164
Iteration: 3569; Percent complete: 89.2%; Average loss: 2.6692
Iteration: 3570; Percent complete: 89.2%; Average loss: 2.6298
Iteration: 3571; Percent complete: 89.3%; Average loss: 2.7264
Iteration: 3572; Percent complete: 89.3%; Average loss: 2.6792
Iteration: 3573; Percent complete: 89.3%; Average loss: 2.7538
Iteration: 3574; Percent complete: 89.3%; Average loss: 2.5564
Iteration: 3575; Percent complete: 89.4%; Average loss: 2.9950
Iteration: 3576; Percent complete: 89.4%; Average loss: 2.5450
Iteration: 3577; Percent complete: 89.4%; Average loss: 2.7532
Iteration: 3578; Percent complete: 89.5%; Average loss: 2.8368
Iteration: 3579; Percent complete: 89.5%; Average loss: 2.7717
Iteration: 3580; Percent complete: 89.5%; Average loss: 2.6891
Iteration: 3581; Percent complete: 89.5%; Average loss: 2.6899
Iteration: 3582; Percent complete: 89.5%; Average loss: 2.7133
Iteration: 3583; Percent complete: 89.6%; Average loss: 2.6759
Iteration: 3584; Percent complete: 89.6%; Average loss: 3.1654
Iteration: 3585; Percent complete: 89.6%; Average loss: 2.8263
Iteration: 3586; Percent complete: 89.6%; Average loss: 2.5629
Iteration: 3587; Percent complete: 89.7%; Average loss: 2.8451
Iteration: 3588; Percent complete: 89.7%; Average loss: 2.7120
Iteration: 3589; Percent complete: 89.7%; Average loss: 2.6822
Iteration: 3590; Percent complete: 89.8%; Average loss: 2.6553
Iteration: 3591; Percent complete: 89.8%; Average loss: 2.5816
Iteration: 3592; Percent complete: 89.8%; Average loss: 2.5425
Iteration: 3593; Percent complete: 89.8%; Average loss: 2.6392
Iteration: 3594; Percent complete: 89.8%; Average loss: 2.9046
Iteration: 3595; Percent complete: 89.9%; Average loss: 2.5345
Iteration: 3596; Percent complete: 89.9%; Average loss: 2.7180
Iteration: 3597; Percent complete: 89.9%; Average loss: 2.5846
Iteration: 3598; Percent complete: 90.0%; Average loss: 2.5512
Iteration: 3599; Percent complete: 90.0%; Average loss: 2.7010
Iteration: 3600; Percent complete: 90.0%; Average loss: 3.0636
Iteration: 3601; Percent complete: 90.0%; Average loss: 2.5631
Iteration: 3602; Percent complete: 90.0%; Average loss: 2.6332
Iteration: 3603; Percent complete: 90.1%; Average loss: 2.6881
Iteration: 3604; Percent complete: 90.1%; Average loss: 2.4466
Iteration: 3605; Percent complete: 90.1%; Average loss: 2.7335
Iteration: 3606; Percent complete: 90.1%; Average loss: 2.8093
Iteration: 3607; Percent complete: 90.2%; Average loss: 2.5595
Iteration: 3608; Percent complete: 90.2%; Average loss: 2.6660
Iteration: 3609; Percent complete: 90.2%; Average loss: 2.8605
Iteration: 3610; Percent complete: 90.2%; Average loss: 2.8959
Iteration: 3611; Percent complete: 90.3%; Average loss: 2.7033
Iteration: 3612; Percent complete: 90.3%; Average loss: 2.7288
Iteration: 3613; Percent complete: 90.3%; Average loss: 2.6413
Iteration: 3614; Percent complete: 90.3%; Average loss: 2.7965
Iteration: 3615; Percent complete: 90.4%; Average loss: 2.6942
Iteration: 3616; Percent complete: 90.4%; Average loss: 2.7668
Iteration: 3617; Percent complete: 90.4%; Average loss: 2.5752
Iteration: 3618; Percent complete: 90.5%; Average loss: 2.6659
Iteration: 3619; Percent complete: 90.5%; Average loss: 2.6933
Iteration: 3620; Percent complete: 90.5%; Average loss: 2.5611
Iteration: 3621; Percent complete: 90.5%; Average loss: 3.0648
Iteration: 3622; Percent complete: 90.5%; Average loss: 2.7047
Iteration: 3623; Percent complete: 90.6%; Average loss: 2.9099
Iteration: 3624; Percent complete: 90.6%; Average loss: 2.6764
Iteration: 3625; Percent complete: 90.6%; Average loss: 2.8216
Iteration: 3626; Percent complete: 90.6%; Average loss: 2.7113
Iteration: 3627; Percent complete: 90.7%; Average loss: 2.7721
Iteration: 3628; Percent complete: 90.7%; Average loss: 2.8352
Iteration: 3629; Percent complete: 90.7%; Average loss: 2.8469
Iteration: 3630; Percent complete: 90.8%; Average loss: 2.6003
Iteration: 3631; Percent complete: 90.8%; Average loss: 2.7505
Iteration: 3632; Percent complete: 90.8%; Average loss: 2.7533
Iteration: 3633; Percent complete: 90.8%; Average loss: 2.7271
Iteration: 3634; Percent complete: 90.8%; Average loss: 2.6912
Iteration: 3635; Percent complete: 90.9%; Average loss: 2.5957
Iteration: 3636; Percent complete: 90.9%; Average loss: 2.6553
Iteration: 3637; Percent complete: 90.9%; Average loss: 2.8763
Iteration: 3638; Percent complete: 91.0%; Average loss: 2.7620
Iteration: 3639; Percent complete: 91.0%; Average loss: 2.7566
Iteration: 3640; Percent complete: 91.0%; Average loss: 2.7553
Iteration: 3641; Percent complete: 91.0%; Average loss: 2.7168
Iteration: 3642; Percent complete: 91.0%; Average loss: 2.5739
Iteration: 3643; Percent complete: 91.1%; Average loss: 2.8317
Iteration: 3644; Percent complete: 91.1%; Average loss: 2.6006
Iteration: 3645; Percent complete: 91.1%; Average loss: 2.7783
Iteration: 3646; Percent complete: 91.1%; Average loss: 2.8809
Iteration: 3647; Percent complete: 91.2%; Average loss: 2.8345
Iteration: 3648; Percent complete: 91.2%; Average loss: 2.7313
Iteration: 3649; Percent complete: 91.2%; Average loss: 2.4553
Iteration: 3650; Percent complete: 91.2%; Average loss: 2.7084
Iteration: 3651; Percent complete: 91.3%; Average loss: 2.9541
Iteration: 3652; Percent complete: 91.3%; Average loss: 2.4919
Iteration: 3653; Percent complete: 91.3%; Average loss: 2.7910
Iteration: 3654; Percent complete: 91.3%; Average loss: 2.6556
Iteration: 3655; Percent complete: 91.4%; Average loss: 2.8182
Iteration: 3656; Percent complete: 91.4%; Average loss: 2.8210
Iteration: 3657; Percent complete: 91.4%; Average loss: 2.7806
Iteration: 3658; Percent complete: 91.5%; Average loss: 2.7353
Iteration: 3659; Percent complete: 91.5%; Average loss: 2.5814
Iteration: 3660; Percent complete: 91.5%; Average loss: 2.7366
Iteration: 3661; Percent complete: 91.5%; Average loss: 2.4612
Iteration: 3662; Percent complete: 91.5%; Average loss: 2.7222
Iteration: 3663; Percent complete: 91.6%; Average loss: 2.7802
Iteration: 3664; Percent complete: 91.6%; Average loss: 2.5618
Iteration: 3665; Percent complete: 91.6%; Average loss: 2.5207
Iteration: 3666; Percent complete: 91.6%; Average loss: 2.9349
Iteration: 3667; Percent complete: 91.7%; Average loss: 2.7211
Iteration: 3668; Percent complete: 91.7%; Average loss: 2.9431
Iteration: 3669; Percent complete: 91.7%; Average loss: 2.7110
Iteration: 3670; Percent complete: 91.8%; Average loss: 2.8495
Iteration: 3671; Percent complete: 91.8%; Average loss: 2.5534
Iteration: 3672; Percent complete: 91.8%; Average loss: 2.6418
Iteration: 3673; Percent complete: 91.8%; Average loss: 2.7975
Iteration: 3674; Percent complete: 91.8%; Average loss: 2.4806
Iteration: 3675; Percent complete: 91.9%; Average loss: 2.8649
Iteration: 3676; Percent complete: 91.9%; Average loss: 2.8258
Iteration: 3677; Percent complete: 91.9%; Average loss: 2.6447
Iteration: 3678; Percent complete: 92.0%; Average loss: 2.7269
Iteration: 3679; Percent complete: 92.0%; Average loss: 2.6160
Iteration: 3680; Percent complete: 92.0%; Average loss: 2.7813
Iteration: 3681; Percent complete: 92.0%; Average loss: 2.7638
Iteration: 3682; Percent complete: 92.0%; Average loss: 2.9787
Iteration: 3683; Percent complete: 92.1%; Average loss: 2.5334
Iteration: 3684; Percent complete: 92.1%; Average loss: 2.7309
Iteration: 3685; Percent complete: 92.1%; Average loss: 2.7943
Iteration: 3686; Percent complete: 92.2%; Average loss: 2.7994
Iteration: 3687; Percent complete: 92.2%; Average loss: 2.5590
Iteration: 3688; Percent complete: 92.2%; Average loss: 2.7950
Iteration: 3689; Percent complete: 92.2%; Average loss: 2.6309
Iteration: 3690; Percent complete: 92.2%; Average loss: 2.6755
Iteration: 3691; Percent complete: 92.3%; Average loss: 2.5779
Iteration: 3692; Percent complete: 92.3%; Average loss: 2.6282
Iteration: 3693; Percent complete: 92.3%; Average loss: 2.7854
Iteration: 3694; Percent complete: 92.3%; Average loss: 2.7399
Iteration: 3695; Percent complete: 92.4%; Average loss: 2.6267
Iteration: 3696; Percent complete: 92.4%; Average loss: 2.7961
Iteration: 3697; Percent complete: 92.4%; Average loss: 2.6914
Iteration: 3698; Percent complete: 92.5%; Average loss: 2.7563
Iteration: 3699; Percent complete: 92.5%; Average loss: 2.8554
Iteration: 3700; Percent complete: 92.5%; Average loss: 2.8392
Iteration: 3701; Percent complete: 92.5%; Average loss: 2.8365
Iteration: 3702; Percent complete: 92.5%; Average loss: 2.8429
Iteration: 3703; Percent complete: 92.6%; Average loss: 2.6667
Iteration: 3704; Percent complete: 92.6%; Average loss: 2.5894
Iteration: 3705; Percent complete: 92.6%; Average loss: 2.8826
Iteration: 3706; Percent complete: 92.7%; Average loss: 2.6455
Iteration: 3707; Percent complete: 92.7%; Average loss: 2.9721
Iteration: 3708; Percent complete: 92.7%; Average loss: 2.7460
Iteration: 3709; Percent complete: 92.7%; Average loss: 2.6496
Iteration: 3710; Percent complete: 92.8%; Average loss: 2.6939
Iteration: 3711; Percent complete: 92.8%; Average loss: 2.5466
Iteration: 3712; Percent complete: 92.8%; Average loss: 2.8015
Iteration: 3713; Percent complete: 92.8%; Average loss: 2.9051
Iteration: 3714; Percent complete: 92.8%; Average loss: 2.6275
Iteration: 3715; Percent complete: 92.9%; Average loss: 2.5996
Iteration: 3716; Percent complete: 92.9%; Average loss: 2.6629
Iteration: 3717; Percent complete: 92.9%; Average loss: 2.6437
Iteration: 3718; Percent complete: 93.0%; Average loss: 2.8600
Iteration: 3719; Percent complete: 93.0%; Average loss: 2.6187
Iteration: 3720; Percent complete: 93.0%; Average loss: 2.8413
Iteration: 3721; Percent complete: 93.0%; Average loss: 2.6694
Iteration: 3722; Percent complete: 93.0%; Average loss: 2.6913
Iteration: 3723; Percent complete: 93.1%; Average loss: 2.9676
Iteration: 3724; Percent complete: 93.1%; Average loss: 2.7764
Iteration: 3725; Percent complete: 93.1%; Average loss: 2.7188
Iteration: 3726; Percent complete: 93.2%; Average loss: 2.5788
Iteration: 3727; Percent complete: 93.2%; Average loss: 2.6990
Iteration: 3728; Percent complete: 93.2%; Average loss: 2.6627
Iteration: 3729; Percent complete: 93.2%; Average loss: 2.5782
Iteration: 3730; Percent complete: 93.2%; Average loss: 2.8155
Iteration: 3731; Percent complete: 93.3%; Average loss: 2.7005
Iteration: 3732; Percent complete: 93.3%; Average loss: 2.7010
Iteration: 3733; Percent complete: 93.3%; Average loss: 2.6195
Iteration: 3734; Percent complete: 93.3%; Average loss: 2.5604
Iteration: 3735; Percent complete: 93.4%; Average loss: 2.7015
Iteration: 3736; Percent complete: 93.4%; Average loss: 2.7595
Iteration: 3737; Percent complete: 93.4%; Average loss: 2.8485
Iteration: 3738; Percent complete: 93.5%; Average loss: 2.7185
Iteration: 3739; Percent complete: 93.5%; Average loss: 2.6930
Iteration: 3740; Percent complete: 93.5%; Average loss: 2.7103
Iteration: 3741; Percent complete: 93.5%; Average loss: 2.8657
Iteration: 3742; Percent complete: 93.5%; Average loss: 2.6119
Iteration: 3743; Percent complete: 93.6%; Average loss: 2.8607
Iteration: 3744; Percent complete: 93.6%; Average loss: 2.5800
Iteration: 3745; Percent complete: 93.6%; Average loss: 2.8363
Iteration: 3746; Percent complete: 93.7%; Average loss: 2.5875
Iteration: 3747; Percent complete: 93.7%; Average loss: 2.5752
Iteration: 3748; Percent complete: 93.7%; Average loss: 2.6814
Iteration: 3749; Percent complete: 93.7%; Average loss: 2.7077
Iteration: 3750; Percent complete: 93.8%; Average loss: 2.5988
Iteration: 3751; Percent complete: 93.8%; Average loss: 2.5683
Iteration: 3752; Percent complete: 93.8%; Average loss: 2.7710
Iteration: 3753; Percent complete: 93.8%; Average loss: 2.6249
Iteration: 3754; Percent complete: 93.8%; Average loss: 2.6297
Iteration: 3755; Percent complete: 93.9%; Average loss: 2.5990
Iteration: 3756; Percent complete: 93.9%; Average loss: 2.5091
Iteration: 3757; Percent complete: 93.9%; Average loss: 2.9211
Iteration: 3758; Percent complete: 94.0%; Average loss: 2.6649
Iteration: 3759; Percent complete: 94.0%; Average loss: 2.6442
Iteration: 3760; Percent complete: 94.0%; Average loss: 2.7243
Iteration: 3761; Percent complete: 94.0%; Average loss: 2.6590
Iteration: 3762; Percent complete: 94.0%; Average loss: 2.5557
Iteration: 3763; Percent complete: 94.1%; Average loss: 2.7739
Iteration: 3764; Percent complete: 94.1%; Average loss: 2.7095
Iteration: 3765; Percent complete: 94.1%; Average loss: 2.6305
Iteration: 3766; Percent complete: 94.2%; Average loss: 2.8458
Iteration: 3767; Percent complete: 94.2%; Average loss: 2.6613
Iteration: 3768; Percent complete: 94.2%; Average loss: 2.7017
Iteration: 3769; Percent complete: 94.2%; Average loss: 2.6428
Iteration: 3770; Percent complete: 94.2%; Average loss: 2.6637
Iteration: 3771; Percent complete: 94.3%; Average loss: 2.5937
Iteration: 3772; Percent complete: 94.3%; Average loss: 2.4131
Iteration: 3773; Percent complete: 94.3%; Average loss: 2.8645
Iteration: 3774; Percent complete: 94.3%; Average loss: 2.7898
Iteration: 3775; Percent complete: 94.4%; Average loss: 2.7606
Iteration: 3776; Percent complete: 94.4%; Average loss: 2.7026
Iteration: 3777; Percent complete: 94.4%; Average loss: 2.5451
Iteration: 3778; Percent complete: 94.5%; Average loss: 2.7570
Iteration: 3779; Percent complete: 94.5%; Average loss: 2.5045
Iteration: 3780; Percent complete: 94.5%; Average loss: 2.6421
Iteration: 3781; Percent complete: 94.5%; Average loss: 2.8060
Iteration: 3782; Percent complete: 94.5%; Average loss: 2.6214
Iteration: 3783; Percent complete: 94.6%; Average loss: 2.7661
Iteration: 3784; Percent complete: 94.6%; Average loss: 2.5366
Iteration: 3785; Percent complete: 94.6%; Average loss: 2.8299
Iteration: 3786; Percent complete: 94.7%; Average loss: 2.7520
Iteration: 3787; Percent complete: 94.7%; Average loss: 2.5595
Iteration: 3788; Percent complete: 94.7%; Average loss: 2.8262
Iteration: 3789; Percent complete: 94.7%; Average loss: 2.7692
Iteration: 3790; Percent complete: 94.8%; Average loss: 2.6270
Iteration: 3791; Percent complete: 94.8%; Average loss: 2.7382
Iteration: 3792; Percent complete: 94.8%; Average loss: 2.5666
Iteration: 3793; Percent complete: 94.8%; Average loss: 2.4817
Iteration: 3794; Percent complete: 94.8%; Average loss: 2.7184
Iteration: 3795; Percent complete: 94.9%; Average loss: 2.8149
Iteration: 3796; Percent complete: 94.9%; Average loss: 2.8622
Iteration: 3797; Percent complete: 94.9%; Average loss: 2.7035
Iteration: 3798; Percent complete: 95.0%; Average loss: 2.8207
Iteration: 3799; Percent complete: 95.0%; Average loss: 2.8682
Iteration: 3800; Percent complete: 95.0%; Average loss: 2.8237
Iteration: 3801; Percent complete: 95.0%; Average loss: 2.6842
Iteration: 3802; Percent complete: 95.0%; Average loss: 2.6905
Iteration: 3803; Percent complete: 95.1%; Average loss: 2.5934
Iteration: 3804; Percent complete: 95.1%; Average loss: 2.5199
Iteration: 3805; Percent complete: 95.1%; Average loss: 2.6770
Iteration: 3806; Percent complete: 95.2%; Average loss: 3.0834
Iteration: 3807; Percent complete: 95.2%; Average loss: 2.8305
Iteration: 3808; Percent complete: 95.2%; Average loss: 2.6028
Iteration: 3809; Percent complete: 95.2%; Average loss: 2.4803
Iteration: 3810; Percent complete: 95.2%; Average loss: 2.4066
Iteration: 3811; Percent complete: 95.3%; Average loss: 2.5371
Iteration: 3812; Percent complete: 95.3%; Average loss: 2.5754
Iteration: 3813; Percent complete: 95.3%; Average loss: 2.5007
Iteration: 3814; Percent complete: 95.3%; Average loss: 2.6370
Iteration: 3815; Percent complete: 95.4%; Average loss: 3.0007
Iteration: 3816; Percent complete: 95.4%; Average loss: 2.6367
Iteration: 3817; Percent complete: 95.4%; Average loss: 2.6426
Iteration: 3818; Percent complete: 95.5%; Average loss: 2.5467
Iteration: 3819; Percent complete: 95.5%; Average loss: 2.5892
Iteration: 3820; Percent complete: 95.5%; Average loss: 2.7300
Iteration: 3821; Percent complete: 95.5%; Average loss: 2.6276
Iteration: 3822; Percent complete: 95.5%; Average loss: 2.7444
Iteration: 3823; Percent complete: 95.6%; Average loss: 2.5706
Iteration: 3824; Percent complete: 95.6%; Average loss: 2.6866
Iteration: 3825; Percent complete: 95.6%; Average loss: 2.7698
Iteration: 3826; Percent complete: 95.7%; Average loss: 2.8578
Iteration: 3827; Percent complete: 95.7%; Average loss: 2.7447
Iteration: 3828; Percent complete: 95.7%; Average loss: 2.4911
Iteration: 3829; Percent complete: 95.7%; Average loss: 2.5625
Iteration: 3830; Percent complete: 95.8%; Average loss: 2.6686
Iteration: 3831; Percent complete: 95.8%; Average loss: 2.7364
Iteration: 3832; Percent complete: 95.8%; Average loss: 2.7239
Iteration: 3833; Percent complete: 95.8%; Average loss: 2.7683
Iteration: 3834; Percent complete: 95.9%; Average loss: 2.5581
Iteration: 3835; Percent complete: 95.9%; Average loss: 2.7130
Iteration: 3836; Percent complete: 95.9%; Average loss: 2.7972
Iteration: 3837; Percent complete: 95.9%; Average loss: 2.5828
Iteration: 3838; Percent complete: 96.0%; Average loss: 2.5633
Iteration: 3839; Percent complete: 96.0%; Average loss: 2.4261
Iteration: 3840; Percent complete: 96.0%; Average loss: 2.9840
Iteration: 3841; Percent complete: 96.0%; Average loss: 2.4329
Iteration: 3842; Percent complete: 96.0%; Average loss: 2.6928
Iteration: 3843; Percent complete: 96.1%; Average loss: 2.5364
Iteration: 3844; Percent complete: 96.1%; Average loss: 2.6908
Iteration: 3845; Percent complete: 96.1%; Average loss: 2.9373
Iteration: 3846; Percent complete: 96.2%; Average loss: 2.6543
Iteration: 3847; Percent complete: 96.2%; Average loss: 2.4652
Iteration: 3848; Percent complete: 96.2%; Average loss: 2.8194
Iteration: 3849; Percent complete: 96.2%; Average loss: 2.5504
Iteration: 3850; Percent complete: 96.2%; Average loss: 2.7144
Iteration: 3851; Percent complete: 96.3%; Average loss: 2.5757
Iteration: 3852; Percent complete: 96.3%; Average loss: 2.5649
Iteration: 3853; Percent complete: 96.3%; Average loss: 2.8861
Iteration: 3854; Percent complete: 96.4%; Average loss: 2.7625
Iteration: 3855; Percent complete: 96.4%; Average loss: 2.4959
Iteration: 3856; Percent complete: 96.4%; Average loss: 2.5341
Iteration: 3857; Percent complete: 96.4%; Average loss: 2.7399
Iteration: 3858; Percent complete: 96.5%; Average loss: 2.8401
Iteration: 3859; Percent complete: 96.5%; Average loss: 2.7785
Iteration: 3860; Percent complete: 96.5%; Average loss: 2.3554
Iteration: 3861; Percent complete: 96.5%; Average loss: 2.7272
Iteration: 3862; Percent complete: 96.5%; Average loss: 2.6660
Iteration: 3863; Percent complete: 96.6%; Average loss: 2.6256
Iteration: 3864; Percent complete: 96.6%; Average loss: 2.6408
Iteration: 3865; Percent complete: 96.6%; Average loss: 2.7654
Iteration: 3866; Percent complete: 96.7%; Average loss: 2.5126
Iteration: 3867; Percent complete: 96.7%; Average loss: 2.4345
Iteration: 3868; Percent complete: 96.7%; Average loss: 2.3693
Iteration: 3869; Percent complete: 96.7%; Average loss: 2.6644
Iteration: 3870; Percent complete: 96.8%; Average loss: 2.4808
Iteration: 3871; Percent complete: 96.8%; Average loss: 2.5966
Iteration: 3872; Percent complete: 96.8%; Average loss: 2.9151
Iteration: 3873; Percent complete: 96.8%; Average loss: 2.4610
Iteration: 3874; Percent complete: 96.9%; Average loss: 2.4201
Iteration: 3875; Percent complete: 96.9%; Average loss: 2.6686
Iteration: 3876; Percent complete: 96.9%; Average loss: 2.5236
Iteration: 3877; Percent complete: 96.9%; Average loss: 2.6586
Iteration: 3878; Percent complete: 97.0%; Average loss: 2.9590
Iteration: 3879; Percent complete: 97.0%; Average loss: 2.4302
Iteration: 3880; Percent complete: 97.0%; Average loss: 2.6391
Iteration: 3881; Percent complete: 97.0%; Average loss: 2.7817
Iteration: 3882; Percent complete: 97.0%; Average loss: 2.4755
Iteration: 3883; Percent complete: 97.1%; Average loss: 2.6718
Iteration: 3884; Percent complete: 97.1%; Average loss: 2.4861
Iteration: 3885; Percent complete: 97.1%; Average loss: 2.7285
Iteration: 3886; Percent complete: 97.2%; Average loss: 2.5305
Iteration: 3887; Percent complete: 97.2%; Average loss: 2.7437
Iteration: 3888; Percent complete: 97.2%; Average loss: 2.5581
Iteration: 3889; Percent complete: 97.2%; Average loss: 2.4240
Iteration: 3890; Percent complete: 97.2%; Average loss: 2.7892
Iteration: 3891; Percent complete: 97.3%; Average loss: 2.4825
Iteration: 3892; Percent complete: 97.3%; Average loss: 2.7610
Iteration: 3893; Percent complete: 97.3%; Average loss: 2.6128
Iteration: 3894; Percent complete: 97.4%; Average loss: 2.6241
Iteration: 3895; Percent complete: 97.4%; Average loss: 2.5221
Iteration: 3896; Percent complete: 97.4%; Average loss: 2.6023
Iteration: 3897; Percent complete: 97.4%; Average loss: 2.5605
Iteration: 3898; Percent complete: 97.5%; Average loss: 2.8383
Iteration: 3899; Percent complete: 97.5%; Average loss: 2.6867
Iteration: 3900; Percent complete: 97.5%; Average loss: 2.6954
Iteration: 3901; Percent complete: 97.5%; Average loss: 2.9551
Iteration: 3902; Percent complete: 97.5%; Average loss: 2.7275
Iteration: 3903; Percent complete: 97.6%; Average loss: 2.5986
Iteration: 3904; Percent complete: 97.6%; Average loss: 2.8466
Iteration: 3905; Percent complete: 97.6%; Average loss: 2.7360
Iteration: 3906; Percent complete: 97.7%; Average loss: 2.5660
Iteration: 3907; Percent complete: 97.7%; Average loss: 2.6321
Iteration: 3908; Percent complete: 97.7%; Average loss: 2.5921
Iteration: 3909; Percent complete: 97.7%; Average loss: 2.7634
Iteration: 3910; Percent complete: 97.8%; Average loss: 2.6278
Iteration: 3911; Percent complete: 97.8%; Average loss: 2.5330
Iteration: 3912; Percent complete: 97.8%; Average loss: 2.5203
Iteration: 3913; Percent complete: 97.8%; Average loss: 2.8304
Iteration: 3914; Percent complete: 97.9%; Average loss: 2.7509
Iteration: 3915; Percent complete: 97.9%; Average loss: 2.6143
Iteration: 3916; Percent complete: 97.9%; Average loss: 2.5132
Iteration: 3917; Percent complete: 97.9%; Average loss: 2.7283
Iteration: 3918; Percent complete: 98.0%; Average loss: 2.7983
Iteration: 3919; Percent complete: 98.0%; Average loss: 2.6531
Iteration: 3920; Percent complete: 98.0%; Average loss: 2.9959
Iteration: 3921; Percent complete: 98.0%; Average loss: 2.6197
Iteration: 3922; Percent complete: 98.0%; Average loss: 2.5391
Iteration: 3923; Percent complete: 98.1%; Average loss: 2.5306
Iteration: 3924; Percent complete: 98.1%; Average loss: 2.6708
Iteration: 3925; Percent complete: 98.1%; Average loss: 2.8038
Iteration: 3926; Percent complete: 98.2%; Average loss: 2.5462
Iteration: 3927; Percent complete: 98.2%; Average loss: 2.5932
Iteration: 3928; Percent complete: 98.2%; Average loss: 2.5263
Iteration: 3929; Percent complete: 98.2%; Average loss: 2.7180
Iteration: 3930; Percent complete: 98.2%; Average loss: 2.6819
Iteration: 3931; Percent complete: 98.3%; Average loss: 2.5960
Iteration: 3932; Percent complete: 98.3%; Average loss: 2.6368
Iteration: 3933; Percent complete: 98.3%; Average loss: 2.7104
Iteration: 3934; Percent complete: 98.4%; Average loss: 2.5781
Iteration: 3935; Percent complete: 98.4%; Average loss: 2.4760
Iteration: 3936; Percent complete: 98.4%; Average loss: 2.6243
Iteration: 3937; Percent complete: 98.4%; Average loss: 3.0587
Iteration: 3938; Percent complete: 98.5%; Average loss: 2.4542
Iteration: 3939; Percent complete: 98.5%; Average loss: 2.4842
Iteration: 3940; Percent complete: 98.5%; Average loss: 2.9324
Iteration: 3941; Percent complete: 98.5%; Average loss: 2.6023
Iteration: 3942; Percent complete: 98.6%; Average loss: 2.5699
Iteration: 3943; Percent complete: 98.6%; Average loss: 2.5725
Iteration: 3944; Percent complete: 98.6%; Average loss: 2.5354
Iteration: 3945; Percent complete: 98.6%; Average loss: 2.6709
Iteration: 3946; Percent complete: 98.7%; Average loss: 2.6047
Iteration: 3947; Percent complete: 98.7%; Average loss: 2.7458
Iteration: 3948; Percent complete: 98.7%; Average loss: 2.3845
Iteration: 3949; Percent complete: 98.7%; Average loss: 2.7017
Iteration: 3950; Percent complete: 98.8%; Average loss: 2.7932
Iteration: 3951; Percent complete: 98.8%; Average loss: 2.6110
Iteration: 3952; Percent complete: 98.8%; Average loss: 2.7973
Iteration: 3953; Percent complete: 98.8%; Average loss: 2.4332
Iteration: 3954; Percent complete: 98.9%; Average loss: 2.6172
Iteration: 3955; Percent complete: 98.9%; Average loss: 2.8037
Iteration: 3956; Percent complete: 98.9%; Average loss: 2.7974
Iteration: 3957; Percent complete: 98.9%; Average loss: 2.6918
Iteration: 3958; Percent complete: 99.0%; Average loss: 2.7703
Iteration: 3959; Percent complete: 99.0%; Average loss: 2.5863
Iteration: 3960; Percent complete: 99.0%; Average loss: 2.4469
Iteration: 3961; Percent complete: 99.0%; Average loss: 2.7529
Iteration: 3962; Percent complete: 99.1%; Average loss: 2.4848
Iteration: 3963; Percent complete: 99.1%; Average loss: 2.5866
Iteration: 3964; Percent complete: 99.1%; Average loss: 2.5649
Iteration: 3965; Percent complete: 99.1%; Average loss: 2.7373
Iteration: 3966; Percent complete: 99.2%; Average loss: 2.4828
Iteration: 3967; Percent complete: 99.2%; Average loss: 2.7134
Iteration: 3968; Percent complete: 99.2%; Average loss: 2.7393
Iteration: 3969; Percent complete: 99.2%; Average loss: 2.4111
Iteration: 3970; Percent complete: 99.2%; Average loss: 2.7646
Iteration: 3971; Percent complete: 99.3%; Average loss: 2.5467
Iteration: 3972; Percent complete: 99.3%; Average loss: 2.5964
Iteration: 3973; Percent complete: 99.3%; Average loss: 2.4795
Iteration: 3974; Percent complete: 99.4%; Average loss: 2.9199
Iteration: 3975; Percent complete: 99.4%; Average loss: 2.8328
Iteration: 3976; Percent complete: 99.4%; Average loss: 2.3931
Iteration: 3977; Percent complete: 99.4%; Average loss: 2.6231
Iteration: 3978; Percent complete: 99.5%; Average loss: 2.5572
Iteration: 3979; Percent complete: 99.5%; Average loss: 2.8275
Iteration: 3980; Percent complete: 99.5%; Average loss: 2.5213
Iteration: 3981; Percent complete: 99.5%; Average loss: 2.7527
Iteration: 3982; Percent complete: 99.6%; Average loss: 2.6679
Iteration: 3983; Percent complete: 99.6%; Average loss: 2.8183
Iteration: 3984; Percent complete: 99.6%; Average loss: 2.8614
Iteration: 3985; Percent complete: 99.6%; Average loss: 2.6987
Iteration: 3986; Percent complete: 99.7%; Average loss: 2.5570
Iteration: 3987; Percent complete: 99.7%; Average loss: 2.6784
Iteration: 3988; Percent complete: 99.7%; Average loss: 2.5042
Iteration: 3989; Percent complete: 99.7%; Average loss: 2.6452
Iteration: 3990; Percent complete: 99.8%; Average loss: 2.5970
Iteration: 3991; Percent complete: 99.8%; Average loss: 2.6767
Iteration: 3992; Percent complete: 99.8%; Average loss: 2.6095
Iteration: 3993; Percent complete: 99.8%; Average loss: 2.7266
Iteration: 3994; Percent complete: 99.9%; Average loss: 2.7654
Iteration: 3995; Percent complete: 99.9%; Average loss: 2.6808
Iteration: 3996; Percent complete: 99.9%; Average loss: 2.6709
Iteration: 3997; Percent complete: 99.9%; Average loss: 2.6427
Iteration: 3998; Percent complete: 100.0%; Average loss: 2.5011
Iteration: 3999; Percent complete: 100.0%; Average loss: 2.7171
Iteration: 4000; Percent complete: 100.0%; Average loss: 2.5403
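
The average loss printed above is a masked cross-entropy, which is roughly a per-token negative log-likelihood. A common sanity check is to convert it to perplexity by exponentiating it; lower perplexity means the model is less "surprised" by the target tokens. A minimal sketch, using the final iteration's reported loss (the exact interpretation assumes the loss is averaged per token):

```python
import math

# Assuming the printed average loss approximates a per-token negative
# log-likelihood, its exponential gives the model's perplexity.
final_avg_loss = 2.5403  # average loss reported at iteration 4000 above
perplexity = math.exp(final_avg_loss)
print(f"perplexity ~ {perplexity:.1f}")  # roughly 12.7
```

A perplexity in the low teens is plausible for a small seq2seq model on movie dialog; watching this number fall across iterations is often easier to interpret than the raw loss.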

 

Run Evaluation

To chat with your model, run the following block.

# Set dropout layers to eval mode
encoder.eval()
decoder.eval()

# Initialize search module
searcher = GreedySearchDecoder(encoder, decoder)

# Begin chatting (uncomment and run the following line to begin)
# evaluateInput(encoder, decoder, searcher, voc)
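
To recap the idea behind `GreedySearchDecoder`: at every decoding step it feeds the previously chosen token back into the decoder and keeps only the single highest-probability next token, stopping at the end-of-sentence token. The following is a self-contained toy sketch of that loop; the dictionary-based `toy_model` below is a hypothetical stand-in for the decoder's softmax output, not part of the tutorial's model:

```python
SOS, EOS = "<sos>", "<eos>"

# Hypothetical next-token distributions, standing in for the decoder's
# softmax output conditioned on the previous token.
toy_model = {
    SOS:       {"hello": 0.7, "goodbye": 0.3},
    "hello":   {".": 0.9, EOS: 0.1},
    ".":       {EOS: 1.0},
    "goodbye": {EOS: 1.0},
}

def greedy_decode(model, max_length=10):
    """Feed the previous token back in; keep only the argmax each step."""
    token, output = SOS, []
    for _ in range(max_length):
        probs = model[token]
        token = max(probs, key=probs.get)  # greedy choice: highest probability
        if token == EOS:
            break
        output.append(token)
    return output

print(greedy_decode(toy_model))  # -> ['hello', '.']
```

The real `GreedySearchDecoder` does the same thing with tensors: it calls the decoder one step at a time, takes `torch.max` over the output distribution, and appends the chosen token index until `EOS_token` or `max_length` is reached.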

 

Conclusion

That's all for this one, folks. Congratulations, you now know the fundamentals of building a generative chatbot model! If you're interested, you can try tailoring the chatbot's behavior by tweaking the model and training parameters, and by customizing the data that you train the model on.
