Using grep, sort and uniq, then saving output to files

Hi all,

I’m trying to split out a large file into many files but am struggling to find the correct command:

My data is as follows:

subscriber,account_type,bill_date
447447447123,939,text,23,
447447447999,939,text,23,
447447447789,914,text,10,
447447447345,900,text,15,

I have over 2 million entries in this file, and what I would like to do is save each row into a separate file based on the unique combination of the 2nd and 4th columns.

so the output files would look like this:

cat 939_23.txt

447447447123,939,text,23,
447447447999,939,text,23,

cat 914_10.txt

447447447789,914,text,10,

cat 900_15.txt

447447447345,900,text,15,

Can anyone help with a neat Perl one-liner, or perhaps a for loop?

Try awk on the input file (assuming the input file has already been run through sort | uniq):

awk -F',' '$2 == 939 && $4 == 23' input > 939_23.txt
awk -F',' '$2 == 914 && $4 == 10' input > 914_10.txt
awk -F',' '$2 == 900 && $4 == 15' input > 900_15.txt

Surely there’s a better way: there are 900 different account types in the 2nd column and nearly 30 bill dates. I was hoping someone could come up with a for loop, or a nested one.

Of course you can do that too. Here is a sample:

for i in 939 914 900
do
  awk -v c="$i" -F',' '$2 == c { output = c "_" $4 ".txt"; print > output }' input.txt
done
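
If you don’t want to hard-code the account types, you can also let awk build the file name from both columns and do the whole job in a single pass, with no shell loop. A rough sketch, assuming the data is in input.txt; the close() call keeps awk under the per-process open-file limit, since roughly 900 account types times 30 bill dates could mean tens of thousands of output files:

# one pass: one output file per unique account_type/bill_date pair
awk -F',' '{ f = $2 "_" $4 ".txt"; print >> f; close(f) }' input.txt

Note that >> appends, so remove any old .txt output before re-running. This also reads the 2-million-line file only once, instead of once per account type.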

Perfect, that’s done the trick :)

Thanks Nixcraft

Jeffers.