AWK remove duplicate lines on Linux

I need to remove duplicate lines while keeping their order in the file. How can I remove duplicate lines with awk?

My input.txt for awk:

test
bar

foo
foo
bar
bar foo

foo bar

Desired output: awk should remove the duplicate lines while keeping the empty lines as they are:

test
bar

foo
bar foo

foo bar

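Before reaching for awk, note why the usual dedup tools fall short here: `sort -u` destroys the line order and collapses all blank lines into one, while `uniq` only removes *adjacent* duplicates. A quick demonstration on the sample input (the file name `input` is arbitrary):

```shell
# recreate the sample input shown above
printf 'test\nbar\n\nfoo\nfoo\nbar\nbar foo\n\nfoo bar\n' > input

# sort -u: order is destroyed and the two blank lines collapse to one
sort -u input

# uniq: only the adjacent duplicate "foo" is dropped; the second "bar"
# and both blank lines survive because they are not next to each other
uniq input
```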
This one is easy:

awk '!NF {print; next}; !($0 in f) {f[$0]; print}' input

# save to the output file without updating input file
awk '!NF {print; next}; !($0 in f) {f[$0]; print}' input > output

# verify all files 
cat input
cat output
diff input output
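
To unpack the command: `NF` holds the number of fields on the current line, so `!NF` is true for empty (and whitespace-only) lines, which are printed unconditionally before `next` skips the rest of the program. For every other line, referencing `f[$0]` creates a key for the whole line in the array `f`, so `!($0 in f)` is true only the first time a given line appears. Running it against the sample input:

```shell
# print blank lines as-is; print other lines only on first occurrence
printf 'test\nbar\n\nfoo\nfoo\nbar\nbar foo\n\nfoo bar\n' | \
  awk '!NF {print; next}; !($0 in f) {f[$0]; print}'
# prints:
# test
# bar
#
# foo
# bar foo
#
# foo bar
```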

Another AWK command option for removing duplicate lines on Linux and Unix-like systems

First, match empty or blank lines and print them unchanged, then remove duplicate lines using awk:

awk '/^[[:blank:]]*$/ { print; next; }; !seen[$0]++' input_file
awk '/^[[:blank:]]*$/ { print; next; }; !seen[$0]++' input_file > output_file
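
The `!seen[$0]++` part is a common awk idiom: `seen[$0]++` evaluates to the line's previous count (0 on first sight) and only then increments it, so the expression is true, and the default `{print}` action fires, exactly once per distinct line. The `/^[[:blank:]]*$/` pattern short-circuits that check for lines that are empty or contain only spaces and tabs. A small sketch with made-up sample data:

```shell
# the third line below contains only spaces; it is printed verbatim
printf 'a\nb\na\n  \nb\nc\n' | \
  awk '/^[[:blank:]]*$/ { print; next }; !seen[$0]++'
# prints:
# a
# b
#    (the whitespace-only line, kept as-is)
# c
```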

Linux sysadmin blog - Linux/Unix Howtos and Tutorials - Linux bash shell scripting wiki