Removing duplicates from a file, keeping only the last occurrence of each line


Input file:

a
b
c
d
b
c
f
e

I tried the options below:

awk '!x[$0]++' file.txt > file.txt.tmp && mv file.txt.tmp file.txt

perl -ne 'print unless $dup{$_}++;' file.txt > file.txt.tmp && mv file.txt.tmp file.txt

awk '{if (++dup[$0] == 1) print $0;}' file.txt > file.txt.tmp && mv file.txt.tmp file.txt
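All three attempts keep the first occurrence of each line and drop the later repeats. A quick way to see this, feeding the sample input inline instead of through `file.txt`:

```shell
# '!x[$0]++' is true only the first time a line is seen,
# so awk prints each line's FIRST occurrence and skips later repeats.
printf '%s\n' a b c d b c f e | awk '!x[$0]++'
# prints (one item per line): a b c d f e
```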

These do remove duplicates, but they keep the first occurrence, giving the output below:

a
b
c
d
f
e

But I need the output below, which keeps the last occurrence instead.

Output file:

a
d
b
c
f
e

I got the answer below:

awk -F'|' '{k=$1 FS $2} NR==FNR {a[k]=NR; next} a[k]==FNR' file.txt file.txt
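This reads the file twice. In the first pass (`NR==FNR`), it records the line number of the last occurrence of each key; in the second pass, it prints a line only when its line number (`FNR`) matches that record, i.e. only the last occurrence survives. The `-F'|'` and `k=$1 FS $2` parts build the key from the first two pipe-separated fields, which matters only if the lines have such fields; for a plain one-column file like the sample, a minimal sketch of the same two-pass idea (using a hypothetical `input.txt`) is:

```shell
# Build the sample input (hypothetical file name input.txt).
printf '%s\n' a b c d b c f e > input.txt
# Pass 1 (NR==FNR): record the line number of the LAST occurrence of each line.
# Pass 2: print a line only when it sits at that last-occurrence position.
awk 'NR==FNR {last[$0]=NR; next} last[$0]==FNR' input.txt input.txt
# prints (one item per line): a d b c f e
```

Note that `a` is printed too: it has no duplicate, so its only occurrence is also its last.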

