Remove duplicates from text file based on second text file

How can I remove all lines from a text file (main.txt) by checking against a second text file (removethese.txt)? What is an efficient way to do this if the files are 10-100 MB or larger? [Using a Mac]

Example:

main.txt
3
1
2
5

Lines to remove:

removethese.txt
3
2
9

Output:

output.txt
1
5

Sample lines (these are the actual lines I am working with - order does not matter):

ChIJW3p7Xz8YyIkRBD_TjKGJRS0
ChIJ08x-0kMayIkR5CcrF-xT6ZA
ChIJIxbjOykFyIkRzugZZ6tio1U
ChIJiaF4aOoEyIkR2c9WYapWDxM
ChIJ39HoPKDix4kRcfdIrxIVrqs
ChIJk5nEV8cHyIkRIhmxieR5ak8
ChIJs9INbrcfyIkRf0zLkA1NJEg
ChIJRycysg0cyIkRArqaCTwZ-E8
ChIJC8haxlUDyIkRfSfJOqwe698
ChIJxRVp80zpcEARAVmzvlCwA24
ChIJw8_LAaEEyIkR68nb8cpalSU
ChIJs35yqObit4kR05F4CXSHd_8
ChIJoRmgSdwGyIkRvLbhOE7xAHQ
ChIJaTtWBAWyVogRcpPDYK42-Nc
ChIJTUjGAqunVogR90Kc8hriW8c
ChIJN7P2NF8eVIgRwXdZeCjL5EQ
ChIJizGc0lsbVIgRDlIs85M5dBs
ChIJc8h6ZqccVIgR7u5aefJxjjc
ChIJ6YMOvOeYVogRjjCMCL6oQco
ChIJ54HcCsaeVogRIy9___RGZ6o
ChIJif92qn2YVogR87n0-9R5tLA
ChIJ0T5e1YaYVogRifrl7S_oeM8
ChIJwWGce4eYVogRcrfC5pvzNd4

grep:

grep -vxFf removethese.txt main.txt >output.txt

fgrep:

fgrep -vxf removethese.txt main.txt >output.txt

fgrep is deprecated. fgrep --help says:

Invocation as 'fgrep' is deprecated; use 'grep -F' instead.

awk (from @fedorqui):

awk 'FNR==NR {a[$0];next} !($0 in a)' removethese.txt main.txt >output.txt

sed:

sed "s=^=/^=;s=$=$/d=" removethese.txt | sed -f- main.txt >output.txt

This will fail if removethese.txt contains special characters. To handle that, you can do:

sed 's/[^^]/[&]/g; s/\^/\\^/g' removethese.txt >newremovethese.txt

and use this newremovethese.txt in the sed command. But it is not worth the effort; it is far too slow compared to the other methods.
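For completeness, a minimal sketch of how the escaped file would then be used, with the same two-step sed pipeline as above:

sed "s=^=/^=;s=$=$/d=" newremovethese.txt | sed -f- main.txt >output.txt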


Tests of the above methods:

The sed method took too long to be worth testing.

Files used:

removethese.txt : Size: 15191908 (15MB)     Blocks: 29672   Lines: 100233
main.txt : Size: 27640864 (27.6MB)      Blocks: 53992   Lines: 180034

Commands:
grep -vxFf | fgrep -vxf | awk

Time taken:
0m7.966s | 0m7.823s | 0m0.237s
0m7.877s | 0m7.889s | 0m0.241s
0m7.971s | 0m7.844s | 0m0.234s
0m7.864s | 0m7.840s | 0m0.251s
0m7.798s | 0m7.672s | 0m0.238s
0m7.793s | 0m8.013s | 0m0.241s

Average:
0m7.8782s | 0m7.8468s | 0m0.2403s

These test results show that fgrep is slightly faster than grep.

The awk method (from @fedorqui) passed the test with flying colors (only 0.2403 seconds!).
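For reference, a minimal sketch of how such timings can be reproduced, assuming bash's time keyword and the two input files described above:

time grep -vxFf removethese.txt main.txt >output.txt
time fgrep -vxf removethese.txt main.txt >output.txt
time awk 'FNR==NR {a[$0];next} !($0 in a)' removethese.txt main.txt >output.txt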

Test environment:

HP ProBook 440 G1 Laptop
8GB RAM
2.5GHz processor with turbo boost up to 3.1GHz
RAM being used: 2.1GB
Swap being used: 588MB
RAM being used when the grep/fgrep command is run: 3.5GB
RAM being used when the awk command is run: 2.2GB or less
Swap being used when the commands are run: 588MB (No change)

Result:

Use the awk method.

There are two standard ways to do this:

grep:

grep -vxFf removethese main

This uses (an equivalent long-option spelling is shown after the list):

  • -v to invert the match.
  • -x to match whole lines, preventing, for example, he from matching lines like hello or highway to hell.
  • -F to use fixed strings, so that the patterns are taken literally instead of being interpreted as regular expressions.
  • -f to read the patterns from another file, in this case from removethese.
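Spelled out with long options (assuming a grep build that accepts them, such as GNU grep), the same command reads:

grep --invert-match --line-regexp --fixed-strings --file=removethese main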

awk:

$ awk 'FNR==NR {a[$0];next} !($0 in a)' removethese main
1
5

This way, we store every line of removethese in the array a[]. Then we read the main file and print only those lines that are not present in the array.
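The same one-liner written out with comments, as a sketch of what each part does (behavior is unchanged):

awk '
    FNR==NR { a[$0]; next }   # first file (removethese): store each line as an array key
    !($0 in a)                # second file (main): print lines not stored as keys
' removethese main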

I like @fedorqui's use of awk for setups where one has enough memory to hold all the "remove these" lines: a concise expression of the in-memory approach.

But for a scenario where the size of the lines to remove is large relative to available memory, and reading that data into an in-memory data structure is an invitation to failure or thrashing, consider the age-old approach: sort/join.

sort main.txt > main_sorted.txt
sort removethese.txt > removethese_sorted.txt

join -t '' -v 1 main_sorted.txt removethese_sorted.txt > output.txt

Notes:

  • This does not preserve the order from main.txt: the lines in output.txt will come out sorted (a sketch for restoring the original order follows this list)
  • It needs enough disk for sort to do its work (temporary files) and to store sorted versions of the input files of the same size
  • Getting join's -v option to do what we want here - print the "unpairable" lines from file 1, dropping the matches - was a bit of serendipity
  • It does not deal directly with locales, collation, keys, etc. - it relies on the defaults of sort and join (-t with an empty argument) to agree on sort order, which happens to work on my current machine
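If the original order of main.txt matters, one workaround (a sketch only, not part of the original answer; it assumes bash plus the standard nl, sort, join and cut, and carries the same collation caveat as above) is to number the lines first, join on the content field, then re-sort by line number:

nl -ba main.txt | sort -t$'\t' -k2 > main_numbered.txt
sort removethese.txt > removethese_sorted.txt
join -t$'\t' -1 2 -2 1 -v 1 main_numbered.txt removethese_sorted.txt |
  sort -t$'\t' -k2,2n | cut -f1 > output.txt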

Here are many simple and effective solutions I found at: http://www.catonmat.net/blog/set-operations-in-unix-shell-simplified/

You need to use one of the Set Complement commands. 100MB files can be processed in seconds to minutes.
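Applied directly to the question's files, the relevant entry below is the Set Complement one; for example (note that the comm variant sorts its output):

$ grep -vxF -f removethese.txt main.txt > output.txt
$ comm -23 <(sort main.txt) <(sort removethese.txt) > output.txt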

Set Membership

$ grep -xc 'element' set    # outputs 1 if element is in set
                            # outputs >1 if set is a multi-set
                            # outputs 0 if element is not in set

$ grep -xq 'element' set    # returns 0 (true)  if element is in set
                            # returns 1 (false) if element is not in set

$ awk '$0 == "element" { s=1; exit } END { exit !s }' set
# returns 0 if element is in set, 1 otherwise.

$ awk -v e='element' '$0 == e { s=1; exit } END { exit !s }' set
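Since these commands report membership through their exit status, they drop straight into a shell conditional; a small usage sketch with one of the sample lines from the question:

$ if grep -xq 'ChIJW3p7Xz8YyIkRBD_TjKGJRS0' main.txt; then echo "present"; else echo "absent"; fi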

Set Equality

$ diff -q <(sort set1) <(sort set2) # returns 0 if set1 is equal to set2
                                    # returns 1 if set1 != set2

$ diff -q <(sort set1 | uniq) <(sort set2 | uniq)
# collapses multi-sets into sets and does the same as previous

$ awk '{ if (!($0 in a)) c++; a[$0] } END{ exit !(c==NR/2) }' set1 set2
# returns 0 if set1 == set2
# returns 1 if set1 != set2

$ awk '{ a[$0] } END{ exit !(length(a)==NR/2) }' set1 set2
# same as previous, requires >= gnu awk 3.1.5

Set Cardinality

$ wc -l set | cut -d' ' -f1    # outputs number of elements in set

$ wc -l < set

$ awk 'END { print NR }' set

Subset Test

$ comm -23 <(sort subset | uniq) <(sort set | uniq) | head -1
# outputs something if subset is not a subset of set
# does not output anything if subset is a subset of set

$ awk 'NR==FNR { a[$0]; next } { if (!($0 in a)) exit 1 }' set subset
# returns 0 if subset is a subset of set
# returns 1 if subset is not a subset of set

Set Union

$ cat set1 set2     # outputs union of set1 and set2
                    # assumes they are disjoint

$ awk 1 set1 set2   # ditto

$ cat set1 set2 ... setn   # union over n sets

$ cat set1 set2 | sort -u  # same, but assumes they are not disjoint

$ sort set1 set2 | uniq

$ sort -u set1 set2

$ awk '!a[$0]++' set1 set2  # ditto

Set Intersection

$ comm -12 <(sort set1) <(sort set2)  # outputs intersection of set1 and set2

$ grep -xF -f set1 set2

$ sort set1 set2 | uniq -d

$ join <(sort -n A) <(sort -n B)

$ awk 'NR==FNR { a[$0]; next } $0 in a' set1 set2

Set Complement

$ comm -23 <(sort set1) <(sort set2)
# outputs elements in set1 that are not in set2

$ grep -vxF -f set2 set1           # ditto

$ sort set2 set2 set1 | uniq -u    # ditto

$ awk 'NR==FNR { a[$0]; next } !($0 in a)' set2 set1

Set Symmetric Difference

$ comm -3 <(sort set1) <(sort set2) | sed 's/\t//g'
# outputs elements that are in set1 or in set2 but not both

$ comm -3 <(sort set1) <(sort set2) | tr -d '\t'

$ sort set1 set2 | uniq -u

$ cat <(grep -vxF -f set1 set2) <(grep -vxF -f set2 set1)

$ grep -vxF -f set1 set2; grep -vxF -f set2 set1

$ awk 'NR==FNR { a[$0]; next } $0 in a { delete a[$0]; next } 1;
       END { for (b in a) print b }' set1 set2

Power Set

$ p() { [ $# -eq 0 ] && echo || (shift; p "$@") |
        while read r ; do echo -e "$1 $r\n$r"; done }
$ p `cat set`

# no nice awk solution, you are welcome to email me one:
# peter@catonmat.net

Set Cartesian Product

$ while read a; do while read b; do echo "$a, $b"; done < set1; done < set2

$ awk 'NR==FNR { a[$0]; next } { for (i in a) print i, $0 }' set1 set2

Disjoint Set Test

$ comm -12 <(sort set1) <(sort set2)  # does not output anything if disjoint

$ awk '++seen[$0] == 2 { exit 1 }' set1 set2 # returns 0 if disjoint
                                         # returns 1 if not

Empty Set Test

$ wc -l < set            # outputs 0  if the set is empty
                         # outputs >0 if the set is not empty

$ awk '{ exit 1 }' set   # returns 0 if set is empty, 1 otherwise

Minimum

$ head -1 <(sort set)    # outputs the minimum element in the set

$ awk 'NR == 1 { min = $0 } $0 < min { min = $0 } END { print min }' set

Maximum

$ tail -1 <(sort set)    # outputs the maximum element in the set

$ awk '$0 > max { max = $0 } END { print max }' set