Converting grouped pandas DataFrame into 3-dimensional array for sequence prediction
I have some highly structured data that I am trying to convert into a set of data-point sequences for sequence prediction with Keras. The data should be a 3D array of shape (sequence_count, max_sequence_length, feature_count). However, the stored data is organized into more levels than that.
For example, in the contrived data below, I need to create one sequence for each UTTERANCE in each DYAD. The actual features are WORD and SCORE, and SEQ_ORDINALITY is the order in which each data point occurs within its sequence:
DYAD | GAME_TURN | UTTERANCE | SEQ_ORDINALITY | WORD | SCORE
1 | 1 | 1 | 1 | it | 0.48
1 | 1 | 1 | 2 | is | 0.22
1 | 1 | 1 | 3 | yellow | 0.81
1 | 1 | 2 | 1 | the | 0.18
1 | 1 | 2 | 2 | big | 0.52
1 | 1 | 2 | 3 | one | 0.61
1 | 2 | 1 | 1 | now | 0.45
1 | 2 | 1 | 2 | it | 0.34
1 | 2 | 1 | 3 | is | 0.55
1 | 2 | 1 | 4 | green | 0.66
2 | 1 | 1 | 1 | okay | 0.23
2 | 1 | 1 | 2 | shall | 0.32
2 | 1 | 1 | 3 | we | 0.43
2 | 1 | 1 | 4 | start | 0.33
How can I then get each (word, score) pair for the grouping (dyad, game_turn, utterance) in the most idiomatic ("pandaic"?) way? I assume there is a more elegant way to do this than, say, iterating over each row in each group of (dyad, game_turn, utterance).
Currently I am able to group the sequences and find the start and end data points, but I don't know what to do next: my guess is to either reshape the data using DataFrame.pivot(..) or DataFrame.stack(..), or to add special "start" and "end" marker rows to each group and then iteratively split the original DataFrame using those rows as delimiters. The logic that works so far is as follows:
import pandas as pd

def read_token_sequences(infile):
    df = pd.read_csv(infile)
    utt_token_groups = df.groupby(["DYAD", "GAME_TURN", "UTTERANCE"])
    # (sequence_count, max_sequence_length, feature_count)
    sequences = utt_token_groups.apply(create_sequence)
    return sequences

def create_sequence(df: pd.DataFrame):
    # TODO: create a 2D array of (sequence_length, features)
    # with actual sequence length padded to equal max_sequence_length
    # Possibilities: "DataFrame.stack(..)" or "DataFrame.pivot(..)"?
    # Other possibility: Append a special "start sequence" row
    # with "start["SEQ_ORDINALITY"] == df["SEQ_ORDINALITY"].min() - 1"
    # and an "end sequence" row
    # with "end["SEQ_ORDINALITY"] == df["SEQ_ORDINALITY"].max() + 1"
    # Start of sequence
    first_token = df.loc[df["SEQ_ORDINALITY"].idxmin()]
    start = pd.Series(first_token, copy=True)
    start["SEQ_ORDINALITY"] = first_token["SEQ_ORDINALITY"] - 1
    # End of sequence
    last_token = df.loc[df["SEQ_ORDINALITY"].idxmax()]
    end = pd.Series(last_token, copy=True)
    end["SEQ_ORDINALITY"] = last_token["SEQ_ORDINALITY"] + 1
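One way to realize the TODO above is to pad each group out to the maximum sequence length by reindexing on row position after sorting. This is only a sketch of that idea, not the original code; the function name `create_padded_sequence` and the pad values ("" and 0.0) are assumptions:

```python
import pandas as pd

def create_padded_sequence(group: pd.DataFrame, max_len: int):
    """Return one (max_len, 2) list of [WORD, SCORE] pairs, padded at the end."""
    # Order tokens by their position within the sequence
    group = group.sort_values("SEQ_ORDINALITY")
    features = group[["WORD", "SCORE"]].reset_index(drop=True)
    # Reindexing past the last row creates NaN rows, which act as padding
    padded = features.reindex(range(max_len))
    padded["WORD"] = padded["WORD"].fillna("")     # pad word
    padded["SCORE"] = padded["SCORE"].fillna(0.0)  # pad score
    return padded.values.tolist()
```

Applied per group (e.g. via `utt_token_groups.apply`), this yields equal-length 2D lists that stack directly into the desired 3D array.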
Expected output
For the example data above, the output array could look like this:
[
[["it", 0.48], ["is", 0.22], ["yellow", 0.81]],
[["the", 0.18], ["big", 0.52], ["one", 0.61]],
[["now", 0.45], ["it", 0.34], ["is", 0.55], ["green", 0.66]],
[["okay", 0.23], ["shall", 0.32], ["we", 0.43], ["start", 0.33]]
]
Here is one way with groupby. (If you do not already have the sequence keys, you can derive a grouping column first:
df['new'] = (df['SEQ_ORDINALITY'].diff() != 1).cumsum().values
and group on that instead.)
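The diff/cumsum trick can be illustrated on a small hypothetical series: every time SEQ_ORDINALITY fails to increase by exactly 1, a new group id begins.

```python
import pandas as pd

# Three runs of consecutive ordinalities: 1-3, 1-2, 1-4
ordinality = pd.Series([1, 2, 3, 1, 2, 1, 2, 3, 4])
# diff() != 1 is True at every sequence start (including the first row,
# where diff() is NaN), so cumsum() numbers the runs 1, 2, 3, ...
group_id = (ordinality.diff() != 1).cumsum()
print(group_id.tolist())  # [1, 1, 1, 2, 2, 3, 3, 3, 3]
```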
df.sort_values("SEQ_ORDINALITY", inplace=True)
sequences = df.groupby(['DYAD', 'GAME_TURN', 'UTTERANCE'])
sequences[['WORD', 'SCORE']].apply(lambda x: x.values.tolist()).tolist()
[[['it', 0.48], ['is', 0.22], ['yellow', 0.81]],
[['the', 0.18], ['big', 0.52], ['one', 0.61]],
[['now', 0.45], ['it', 0.34], ['is', 0.55], ['green', 0.66]],
[['okay', 0.23], ['shall', 0.32], ['we', 0.43], ['start', 0.33]]]
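To feed the ragged list above into Keras, it still needs to be padded into the (sequence_count, max_sequence_length, feature_count) shape. Since the word feature is a string, keras.preprocessing.sequence.pad_sequences (which expects numeric data) does not apply directly; here is a plain-Python padding sketch, where the function name and the pad value ("" and 0.0) are assumptions:

```python
import numpy as np

def pad_to_3d(sequences, pad_value=("", 0.0)):
    """Pad a ragged list of [word, score] sequences into a 3D object array."""
    max_len = max(len(seq) for seq in sequences)
    padded = [seq + [list(pad_value)] * (max_len - len(seq)) for seq in sequences]
    # dtype=object keeps strings and floats side by side
    return np.array(padded, dtype=object)
```

For a real model you would typically map words to integer ids first, after which the array can be fully numeric.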