Concatenate and sort to remove duplicates from a text file on Linux

I tried the paste command with uniq. Following is the input. The 1st and 3rd entries are the same, and the 2nd and 4th are also the same:

* Wed Feb 24 2016 Tariq Saeed <tariq.x.saeed@mail.com> 2.0.7-1.0.7

- add vmcore dump support for ocfs2 [bug: 22822573]



* Mon Jun 8 2015 Brian Maly <brian.maly@mail.com> 2.0.7-1.0.3

- Fix stall on failure in kdump init script [bug: 21111440]

- kexec-tools: fix fail to find mem hole failure on i386 [bug: 21111440]



* Wed Feb 24 2016 Tariq Saeed <tariq.x.saeed@mail.com> 2.0.7-1.0.7

- add vmcore dump support for ocfs2 [bug: 22822573]



* Mon Jun 8 2015 Brian Maly <brian.maly@mail.com> 2.0.7-1.0.3

- Fix stall on failure in kdump init script [bug: 21111440]

- kexec-tools: fix fail to find mem hole failure on i386 [bug: 21111440]

Expected Output:

* Wed Feb 24 2016 Tariq Saeed <tariq.x.saeed@mail.com> 2.0.7-1.0.7

- add vmcore dump support for ocfs2 [bug: 22822573]



* Mon Jun 8 2015 Brian Maly <brian.maly@mail.com> 2.0.7-1.0.3

- Fix stall on failure in kdump init script [bug: 21111440]

- kexec-tools: fix fail to find mem hole failure on i386 [bug: 21111440]

I have picked only four entries from the file; the actual file has many more entries with duplicates.

I thought of combining the lines of each entry and running uniq, but some entries have two lines while others have three.

cat input.txt | paste -d' ' - - | uniq (this only joins pairs of lines, so it won't work when an entry spans more than two lines)
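Since the entries are separated by blank lines, one way around the varying entry lengths is awk's paragraph mode: setting RS to the empty string makes awk treat each blank-line-separated block as a single record, so whole entries can be de-duplicated no matter how many lines they span. A minimal sketch, assuming the file is named input.txt and using seen as an arbitrary array name:

awk 'BEGIN { RS = ""; ORS = "\n\n" } !seen[$0]++' input.txt

Setting ORS to "\n\n" restores the blank-line separator between the entries that are kept.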

Try something like the following:

awk '!x[$0]++' input.txt                 # print de-duplicated lines to stdout
awk '!x[$0]++' input.txt > output.txt    # or redirect the result to a file
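Here x[$0]++ counts how many times each line has been seen; the expression is true only on a line's first occurrence, so awk's default print action runs exactly once per distinct line, preserving the original order. One caveat: blank lines are lines too, so all but the first blank line are also removed. If you want to keep the blank separators between entries, a common variant (a sketch, same hypothetical file names) is:

awk '!NF || !x[$0]++' input.txt > output.txt    # !NF is true for blank lines, so they always pass through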