
I have a CSV file. For example columns:

col1, col2, col3, col4, col5 

I need to restructure it as

col1, col2, new_col1, col3, new_col2, col4, new_col3, col5. 

The values in the rows will be null. I am assuming this has to be done with .

I tried the questions already in StackOverflow, but all I am getting is to add at the end or beginning. I need to be able to specify column number/position where the column name needs to be added.

  • Please show the current attempt to see where the problem is. Commented Oct 11, 2018 at 9:15
  • Sorry, this is not the way StackOverflow works. Questions of the form "I want to do X, please give me tips and/or sample code" are considered off-topic. Please visit the help center and read How to Ask, and especially read Why is “Can someone help me?” not an actual question? Commented Oct 11, 2018 at 9:16
  • awk is perfect for this! $1 contains the 1st column, $2 the 2nd, and so on. Good luck, and update your question when you're stuck and need additional help. Commented Oct 11, 2018 at 9:26
  • Your question is a bit unclear since you state: _I need to be able to specify column number/position where the column name needs to be added._ Does this mean that you want to process headers also? Commented Oct 11, 2018 at 10:07

2 Answers


Remark: this answer assumes that no CSV field contains a double-quoted string ("string") with embedded comma or newline characters. If that assumption does not hold for your problem, other approaches are needed.

If you want to add a column to a CSV file, you can do the following:

add a column at the beginning:

awk 'BEGIN{FS=OFS=","}{print new_value OFS $0}' file 

add a column at the end:

awk 'BEGIN{FS=OFS=","}{print $0 OFS new_value}' file 
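As a concrete illustration of the two one-liners above (the file name `file.csv` and the values `id` and `x` are made up for this example):

```shell
# Create a small sample file (hypothetical contents, for illustration only)
printf 'a,b\nc,d\n' > file.csv

# Prepend a constant column "id"
awk 'BEGIN{FS=OFS=","}{print "id" OFS $0}' file.csv
# prints:
# id,a,b
# id,c,d

# Append a constant column "x"
awk 'BEGIN{FS=OFS=","}{print $0 OFS "x"}' file.csv
# prints:
# a,b,x
# c,d,x
```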

add a column before column n:

awk 'BEGIN{FS=OFS=","}{$n=new_value OFS $n}1' file 

add column after column n:

awk 'BEGIN{FS=OFS=","}{$n=$n OFS new_value}1' file 
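In the two templates above, `n` and `new_value` are placeholders: inside an awk script an unset variable `n` evaluates to 0, so `$n` would be the whole record. One way (among others) to supply the position and the value is awk's `-v` option; the file name and values below are made up for illustration:

```shell
printf 'a,b,c\n1,2,3\n' > file.csv

# Insert the value "X" before column 2 (positions are 1-based)
awk -v n=2 -v val=X 'BEGIN{FS=OFS=","}{$n=val OFS $n}1' file.csv
# prints:
# a,X,b,c
# 1,X,2,3

# Insert the value "X" after column 2
awk -v n=2 -v val=X 'BEGIN{FS=OFS=","}{$n=$n OFS val}1' file.csv
# prints:
# a,b,X,c
# 1,2,X,3
```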

add a column before each of columns n1 < n2 < ... < nm (work from the back, so earlier insertions do not shift the later positions):

awk 'BEGIN{FS=OFS=","; m=split("n1,n2,n3,...,nm",a)} {for(i=m;i>0;--i) $(a[i])=new_value OFS $(a[i])}1' file

or, with a different value for each position:

awk 'BEGIN{FS=OFS=","; m=split("n1,n2,n3,...,nm",a); split("value1,value2,...,valuem",v)} {for(i=m;i>0;--i) $(a[i])=v[i] OFS $(a[i])}1' file

add a column after each of columns n1 < n2 < ... < nm (again, work from the back):

awk 'BEGIN{FS=OFS=","; m=split("n1,n2,n3,...,nm",a)} {for(i=m;i>0;--i) $(a[i])=$(a[i]) OFS new_value}1' file

or, with a different value for each position:

awk 'BEGIN{FS=OFS=","; m=split("n1,n2,n3,...,nm",a); split("value1,value2,...,valuem",v)} {for(i=m;i>0;--i) $(a[i])=$(a[i]) OFS v[i]}1' file
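Putting this together for the exact layout in the question (new header names in the first row, empty/null values in the data rows; the file name and sample data here are assumptions for the sketch):

```shell
printf 'col1,col2,col3,col4,col5\n1,2,3,4,5\n' > file.csv

# Insert new_col1/new_col2/new_col3 before original columns 3, 4 and 5,
# working from the back so the positions stay valid. The header row
# (NR==1) gets the new names; data rows get empty (null) fields.
awk 'BEGIN{FS=OFS=","; m=split("3,4,5",a); split("new_col1,new_col2,new_col3",v)}
     {for(i=m;i>0;--i) $(a[i]) = (NR==1 ? v[i] : "") OFS $(a[i])}1' file.csv
# prints:
# col1,col2,new_col1,col3,new_col2,col4,new_col3,col5
# 1,2,,3,,4,,5
```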

3 Comments

Nice answer, but you're really enabling "please give me tips and/or sample code", aren't you ;-> ? Good luck to all!
@shellter maybe true. If the question is interesting, I don't mind helping people out, as I think it helps other people. Nonetheless, I will always add the obligatory message to make clear that this is not the way to go forward, in the hope their next question might be more up to standard.
I hope it's obvious I don't really care, as there doesn't seem to be any cure for early posters' inability to read and absorb the ?? number of pages attached to [help] ;-). Eventually, you may get tired of such questions; if not, you're a better person than I am ;-). And again, a really perfect answer to this problem. Good luck to all.

Ok, let's say your csv looks like this:

col1, col2, col3, col4, col5
1, 2, 3, 4, 5

Then, the command:

awk 'BEGIN{FS=OFS=","}{print $1,$2,"new_Col1",$3,"new_Col2",$4,"new_Col3",$5}' t.csv

will give the following output:

col1, col2,new_Col1, col3,new_Col2, col4,new_Col3, col5
1, 2,new_Col1, 3,new_Col2, 4,new_Col3, 5

You get the idea? Is that what you want to know?

Even easier to read (note the -F',' — without it awk would split on whitespace rather than commas):

awk -F',' '{print $1","$2",new_Col1,"$3",new_Col2,"$4",new_Col3,"$5}' t.csv
