Double list comprehension for occurrences of a string in a list of strings

I have two lists of lists:

text = [['hello this is me'], ['oh you know u']]
phrases = [['this is', 'u'], ['oh you', 'me']]

I need to split the text so that the word combinations that appear in phrases become single strings:

result = [['hello', 'this is', 'me'], ['oh you', 'know', 'u']]

I tried using zip(), but it goes through the lists consecutively, while I need to check each of the lists. I also tried the find() method, but in this example it also finds every letter 'u' and turns it into a string (so inside the word 'you' it produces 'yo', 'u'). I wish replace() also worked when replacing a string with a list, because that would let me do something like this:

for line in text:
        line = line.replace('this is', ['this is'])

But having tried everything, I still haven't found a method that works in this case. Can you help me?

Try this.

import re

def filter_phrases(phrases):
    phrase_l = sorted(phrases, key=len)
    
    for i, v in enumerate(phrase_l):
        for j in phrase_l[i + 1:]:
            if re.search(rf'\b{v}\b', j):
                phrases.remove(v)
    
    return phrases


text = [
    ['hello this is me'], 
    ['oh you know u'],
    ['a quick brown fox jumps over the lazy dog']
]
phrases = [
    ['this is', 'u'], 
    ['oh you', 'me'],
    ['fox', 'brown fox']
]

# Flatten the `text` and `phrases` list
text = [
    line for l in text 
    for line in l
]
phrases = {
    phrase for l in phrases 
    for phrase in l
}

# If you're quite sure that your phrase
# list doesn't have any overlapping 
# zones, then I strongly recommend 
# against using this `filter_phrases()` 
# function.
phrases = filter_phrases(phrases)

result = []

for line in text:
    # This is the pattern to match the
    # 'space' before the phrases 
    # in the line on which the split
    # is to be done.
    l_phrase_1 = '|'.join([
        f'(?={phrase})' for phrase in phrases
        if re.search(rf'\b{phrase}\b', line)
    ])
    # This is the pattern to match the
    # 'space' after the phrases 
    # in the line on which the split
    # is to be done.
    l_phrase_2 = '|'.join([
        f'(?<={phrase})' for phrase in phrases
        if re.search(rf'\b{phrase}\b', line)
    ])
    
    # Now, we combine the both patterns
    # `l_phrase_1` and `l_phrase_2` to
    # create our master regex. 
    result.append(re.split(
        rf'\s(?:{l_phrase_1})|(?:{l_phrase_2})\s', 
        line
    ))
    
print(result)

# OUTPUT (PRETTY FORM)
#
# [
#     ['hello', 'this is', 'me'], 
#     ['oh you', 'know', 'u'], 
#     ['a quick', 'brown fox', 'jumps over the lazy dog']
# ]

Here, I used re.split to split on the space just before or just after a phrase in the text.
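
As a minimal sketch of the idea with a single phrase (the phrase and line are taken from the example above), the pattern matches the space just before the phrase (lookahead) or just after it (lookbehind):

import re

line = 'hello this is me'
phrase = 'this is'

# split on the space before the phrase (lookahead) or after it (lookbehind)
pattern = rf'\s(?={phrase})|(?<={phrase})\s'
print(re.split(pattern, line))  # ['hello', 'this is', 'me']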

This is a near-complete answer to get you started.

Assumption: looking at your example, I can't see why the phrases have to stay grouped in their sub-lists, since your second text is split on the 'u' from the first sub-list of phrases.

Preparation

Flatten the phrases list-of-lists into a single list. I saw an example of this elsewhere:

flatten = lambda t: [item for sublist in t for item in sublist if item != '']

Main code:

My strategy is to look at each item in the text list (at the start there is only one item) and try to split it on one of the phrases. If a split is found, a change has occurred (I flag it with a marker so I can track it); I replace that item with its split copy and then flatten, so everything stays in one list. If a change occurred, the loop restarts from the beginning (it restarts because there is no way to tell whether something later in the phrases list could also have been split earlier).

import re

flatten = lambda t: [item for sublist in t for item in sublist if item != '']

text = [['hello this is me'], ['oh you know u']]
phrases = ['this is', 'u', 'oh you', 'me']

output = []
for t in text:
    t_copy = t
    no_change = 1
    while no_change:
        for i, tc in enumerate(t_copy):
            for p in phrases:
                before = [tc] # each item is a string, my output is a list, must change to list to "compare apples to apples"
                found = re.split(f'({p})', tc)
                found = [f.strip() for f in found]
                if found != before:
                    t_copy[i] = found
                    t_copy = flatten(t_copy) # flatten to avoid nested lists
                    no_change = 0
                    break
                no_change = 1
    output.append(t_copy)
print(output)

Comments:

I modified the flatten function to remove empty entries. I noticed that if you split on something that occurs at an endpoint, an empty entry gets added: ("I love u" split on "u" > ["I love", "u", ''])
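
A quick demonstration of that endpoint behaviour, reusing the modified flatten lambda from above:

import re

flatten = lambda t: [item for sublist in t for item in sublist if item != '']

found = [f.strip() for f in re.split('(u)', 'I love u')]
print(found)             # ['I love', 'u', ''] -> trailing empty entry
print(flatten([found]))  # ['I love', 'u']     -> empty entry removed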

Clarified with the original poster:

Given the text pack my box with five dozen liquor jugs and the phrase five dozen,

the result should be:

['pack', 'my', 'box', 'with', 'five dozen', 'liquor', 'jugs']

not:

['pack my box with', 'five dozen', 'liquor jugs']

Each text and phrase is converted to a Python list of words like ['this', 'is', 'an', 'example'], which prevents 'u' being matched inside a word.
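
A minimal sketch of why the word-list representation avoids the substring problem from the question (the 'u' hiding inside 'you'); the line here is just the second example text:

line = 'oh you know u'

# substring search finds the 'u' inside 'you' first
print(line.find('u'))           # 5 -> the 'u' in 'you'

# word-level search only matches the standalone word
print(line.split())             # ['oh', 'you', 'know', 'u']
print(line.split().index('u'))  # 3 -> the final word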

All possible sub-phrases of each text are generated by compile_subphrases(). Longer phrases (more words) are generated first, so they are matched before shorter ones: 'five dozen jugs' is always matched in preference to 'five dozen' or 'five'.
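
For example, a quick check of the generation order, reusing compile_subphrases() from the full listing below on an assumed three-word text:

def compile_subphrases(text, minwords=1, include_self=True):
    # copied from the full listing below
    words = text.split()
    text_length = len(words)
    max_phrase_length = text_length if include_self else text_length - 1
    # NOTE: longest phrases first
    for phrase_length in range(max_phrase_length + 1, minwords - 1, -1):
        n_length_phrases = (' '.join(words[r:r + phrase_length])
                            for r in range(text_length - phrase_length + 1))
        yield from n_length_phrases

print(list(compile_subphrases('pack my box')))
# ['pack my box', 'pack my', 'my box', 'pack', 'my', 'box']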

Performing the match

Phrases and sub-phrases are compared using list slicing, roughly like this:

    text = ['five', 'dozen', 'liquor', 'jugs']
    phrase = ['liquor', 'jugs']
    if text[2:4] == phrase:
        print('matched')

Using this way of comparing phrases, the script walks through the original text and rewrites it with the phrases picked out.

texts = [['hello this is me'], ['oh you know u']]
phrases_to_match = [['this is', 'u'], ['oh you', 'me']]
from itertools import chain

def flatten(list_of_lists):
    return list(chain(*list_of_lists))

def compile_subphrases(text, minwords=1, include_self=True):
    words = text.split()
    text_length = len(words)
    max_phrase_length = text_length if include_self else text_length - 1
    # NOTE: longest phrases first
    for phrase_length in range(max_phrase_length + 1, minwords - 1, -1):
        n_length_phrases = (' '.join(words[r:r + phrase_length])
                            for r in range(text_length - phrase_length + 1))
        yield from n_length_phrases
        
def match_sublist(mainlist, sublist, i):
    if i + len(sublist) > len(mainlist):
        return False
    return sublist == mainlist[i:i + len(sublist)]

phrases_to_match = list(flatten(phrases_to_match))
texts = list(flatten(texts))
results = []
for raw_text in texts:
    print(f"Raw text: '{raw_text}'")
    matched_phrases = [
        subphrase.split()
        for subphrase
        in compile_subphrases(raw_text)
        if subphrase in phrases_to_match
    ]
    phrasal_text = []
    index = 0
    text_words = raw_text.split()
    while index < len(text_words):
        for matched_phrase in matched_phrases:
            if match_sublist(text_words, matched_phrase, index):
                phrasal_text.append(' '.join(matched_phrase))
                index += len(matched_phrase)
                break
        else:
            phrasal_text.append(text_words[index])
            index += 1
    results.append(phrasal_text)
print(f'Phrases to match: {phrases_to_match}')
print(f"Results: {results}")

Result:

$python3 main.py
Raw text: 'hello this is me'
Raw text: 'oh you know u'
Phrases to match: ['this is', 'u', 'oh you', 'me']
Results: [['hello', 'this is', 'me'], ['oh you', 'know', 'u']]

To test this answer and the others against a bigger dataset, try this at the start of the code. It generates hundreds of variations of a single long sentence to simulate hundreds of texts.

from itertools import chain, combinations
import random

#texts = [['hello this is me'], ['oh you know u']]
theme = ' '.join([
    'pack my box with five dozen liquor jugs said',
    'the quick brown fox as he jumped over the lazy dog'
])
variations = list([
    ' '.join(combination)
    for combination
    in combinations(theme.split(), 5)
])
texts = random.choices(variations, k=500)
#phrases_to_match = [['this is', 'u'], ['oh you', 'me']]
phrases_to_match = [
    ['pack my box', 'quick brown', 'the quick', 'brown fox'],
    ['jumped over', 'lazy dog'],
    ['five dozen', 'liquor', 'jugs']
]

This uses Python's best-in-class list slicing. phrase[::2] creates a slice consisting of the 0th, 2nd, 4th, 6th... elements of a list. This is the basis of the solution below.

For each phrase, | symbols are placed on either side of the found phrase. Below shows 'this is' being marked inside 'hello this is me':

'hello this is me' -> 'hello|this is|me'

When the text is split on |:

['hello', 'this is', 'me']

The even elements [::2] are the unmatched text and the odd elements [1::2] are the matched phrases, as shown below:

                   0         1       2
unmatched:     ['hello',            'me']
matched:                 'this is',       
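
A small sketch of that slicing on the delimited example string:

segmented = 'hello|this is|me'.split('|')

print(segmented)        # ['hello', 'this is', 'me']
print(segmented[::2])   # ['hello', 'me']  -> unmatched text (even slots)
print(segmented[1::2])  # ['this is']      -> matched phrases (odd slots)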

If the segments end up with different numbers of matched and unmatched elements, zip_longest is used to pad the gaps with empty strings, so there is always a balanced pair of unmatched and matched text:

                   0         1       2     3
unmatched:     ['hello',            'me',     ]
matched:                 'this is',        ''  

For each phrase, the previously unmatched (even) elements of the text are scanned, the phrase (if found) is delimited with |, and the result is merged back into the segmented text.

The matched and unmatched segments are merged back into the segmented text using zip() followed by flatten(), taking care to preserve the even (unmatched) and odd (matched) indexing of new and existing text segments. Newly matched phrases are merged back in as odd elements, so they are not scanned again for embedded phrases. This prevents conflicts between phrases with similar wording, such as 'this is' and 'this'.
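
A short sketch of that padding-and-merge step, reusing the flatten() helper defined in the full listing below (the segment values are assumed from the 'hello this is me' example):

from itertools import zip_longest

def flatten(string_list):
    # same helper as in the full listing below
    flat = []
    for el in string_list:
        if isinstance(el, (list, tuple)):
            flat.extend(el)
        else:
            flat.append(el)
    return flat

unmatched = ['hello', 'me']  # even slots after re-splitting on '|'
matched = ['this is']        # existing odd slots

# pad the shorter side with '' so the even/odd indexing stays intact, then flatten
merged = flatten(zip_longest(unmatched, matched, fillvalue=''))
print(merged)                # ['hello', 'this is', 'me', '']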

flatten() is used everywhere. It finds sub-lists embedded in a larger list and flattens their contents into the main list:

['outer list 1', ['inner list 1', 'inner list 2'], 'outer list 2']

becomes:

['outer list 1', 'inner list 1', 'inner list 2', 'outer list 2']

This is useful for collecting phrases from multiple embedded lists, and for merging split or zipped sub-lists back into the segmented text:

[['the quick brown fox says', ''], ['hello', 'this is', 'me', '']] ->

['the quick brown fox says', '', 'hello', 'this is', 'me', ''] ->

                   0                        1       2        3          4     5
unmatched:     ['the quick brown fox says',         'hello',            'me',    ]
matched:                                    '',              'this is',       '',

Finally, the empty string elements, which are only there for the even/odd alignment, can be removed:

['the quick brown fox says', '', 'hello', 'this is', '', 'me', ''] ->
['the quick brown fox says', 'hello', 'this is', 'me']

texts = [['hello this is me'], ['oh you know u'],
         ['the quick brown fox says hello this is me']]
phrases_to_match = [['this is', 'u'], ['oh you', 'you', 'me']]
from itertools import zip_longest

def flatten(string_list):
    flat = []
    for el in string_list:
        if isinstance(el, list) or isinstance(el, tuple):
            flat.extend(el)
        else:
            flat.append(el)
    return flat

phrases_to_match = flatten(phrases_to_match)
# longer phrases are given priority to avoid problems with overlapping
phrases_to_match.sort(key=lambda phrase: -len(phrase.split()))
segmented_texts = []
for text in flatten(texts):
    segmented_text = text.split('|')
    for phrase in phrases_to_match:
        new_segments = segmented_text[::2]
        delimited_phrase = f'|{phrase}|'
        for match in [f' {phrase} ', f' {phrase}', f'{phrase} ']:
            new_segments = [
                segment.replace(match, delimited_phrase)
                for segment
                in new_segments
            ]
        new_segments = flatten([segment.split('|') for segment in new_segments])
        segmented_text = new_segments if len(segmented_text) == 1 else \
            flatten(zip_longest(new_segments, segmented_text[1::2], fillvalue=''))
    segmented_text = [segment for segment in segmented_text if segment.strip()]
    # option 1: unmatched text is split into words
    segmented_text = flatten([
        segment if segment in phrases_to_match else segment.split()
        for segment
        in segmented_text
    ])
    segmented_texts.append(segmented_text)
print(segmented_texts)

Result:

[['hello', 'this is', 'me'], ['oh you', 'know', 'u'],
 ['the', 'quick', 'brown', 'fox', 'says', 'hello', 'this is', 'me']]

Note that the phrase 'oh you' takes priority over its sub-phrase 'you' and there is no conflict.