
I have one file with -| as a delimiter after each section. I need to create separate files for each section using Unix.

Example of the input file:

wertretr
ewretrtret
1212132323
000232
-|
ereteertetet
232434234
erewesdfsfsfs
0234342343
-|
jdhg3875jdfsgfd
sjdhfdbfjds
347674657435
-|

Expected result in File 1

wertretr
ewretrtret
1212132323
000232
-|

Expected result in File 2

ereteertetet
232434234
erewesdfsfsfs
0234342343
-|

Expected result in File 3

jdhg3875jdfsgfd
sjdhfdbfjds
347674657435
-|
  • Are you writing a program or do you want to do this using command line utilities? Commented Jul 3, 2012 at 15:13
  • Using command-line utilities would be preferable. Commented Jul 3, 2012 at 15:27

12 Answers


A one-liner, no programming (except the regexp).

csplit --digits=2 --quiet --prefix=outfile infile "/-|/+1" "{*}" 

tested on: csplit (GNU coreutils) 8.30

Notes about usage on Apple Mac

"For OS X users, note that the version of csplit that comes with the OS doesn't work. You'll want the version in coreutils (installable via Homebrew), which is called gcsplit." — @Danial

"Just to add, you can get the version for OS X to work (at least with High Sierra). You just need to tweak the args a bit csplit -k -f=outfile infile "/-\|/+1" "{3}". Features that don't seem to work are the "{*}", I had to be specific on the number of separators, and needed to add -k to avoid it deleting all outfiles if it can't find a final separator. Also if you want --digits, you need to use -n instead." — @Pebbl


9 Comments

+1 - shorter: csplit -n2 -s -f outfile infile "/-|/+1" "{*}"
@zb226 I did it in long, so that no explanation was needed.
I suggest adding --elide-empty-files, otherwise there will be an empty file at the end.
Just for those who wonder what the parameters mean: --digits=2 controls the number of digits used to number the output files (2 is default for me, so not necessary). --quiet suppresses output (also not really necessary or asked for here). --prefix specifies the prefix of the output files (default is xx). So you can skip all the parameters and will get output files like xx12.
I have updated the question to include the unread comments about Apple Mac.
awk '{f="file" NR; print $0 " -|"> f}' RS='-\\|' input-file 

Explanation (edited):

RS is the record separator, and this solution uses a gnu awk extension which allows it to be more than one character. NR is the record number.

The print statement prints a record followed by " -|" into a file that contains the record number in its name.
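As a rough sketch of the result on the question's sample input (file1 through file3 come from the "file" NR concatenation; because each record keeps its trailing newline, the re-appended delimiter lands on its own line with a leading space, and the newline after the last -| may yield an extra file4):

$ awk '{f="file" NR; print $0 " -|" > f}' RS='-\\|' input-file
$ cat file1
wertretr
ewretrtret
1212132323
000232
 -|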

13 Comments

@rzetterbeg This should work well with large files. awk processes the file one record at a time, so it only reads as much as it needs to. If the first occurrence of the record separator shows up very late in the file, it may be a memory crunch since one whole record must fit into memory. Also, note that using more than one character in RS is not standard awk, but this will work in gnu awk.
For me it split 3.3 GB in 31.728s
@ccf The filename is just the string on the right side of the >, so you can construct it however you like. eg, print $0 "-|" > "file" NR ".txt"
@AGrush That is version dependent. You can do awk '{f="file" NR; print $0 " -|" > f}'

Debian has csplit, but I don't know if that's common to all/most/other distributions. If not, though, it shouldn't be too hard to track down the source and compile it...
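A quick way to check what you have (a sketch; --version is GNU-specific, so BSD and macOS builds will reject that flag):

$ command -v csplit
/usr/bin/csplit
$ csplit --version | head -n 1
csplit (GNU coreutils) 8.30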

3 Comments

I agree. My Debian box says that csplit is part of GNU coreutils. So any GNU operating system, such as all the GNU/Linux distros, will have it. Wikipedia also mentions 'The Single UNIX® Specification, Issue 7' on the csplit page, so I suspect you got it.
Since csplit is in POSIX, I would expect it to be available on essentially all Unix-like systems.
Although csplit is POSIX, the problem (it seems, doing a test with it on the Ubuntu system sitting in front of me) is that there is no obvious way to make it use a more modern regex syntax. Compare: csplit --prefix gold-data - "/^==*$/" vs csplit --prefix gold-data - "/^=+$/". At least GNU grep has -e.

I solved a slightly different problem, where the file contains a line with the name where the text that follows should go. This perl code does the trick for me:

#!/path/to/perl -w

# comment the line below for UNIX systems
use Win32::Clipboard;

# Get command line flags
#print ($#ARGV, "\n");
if($#ARGV == 0) {
    print STDERR "usage: ncsplit.pl --mff -- filename.txt [...] \n\nNote that no space is allowed between the '--' and the related parameter.\n\nThe mff is found on a line followed by a filename. All of the contents of filename.txt are written to that file until another mff is found.\n";
    exit;
}

# this package sets the ARGV count variable to -1;
use Getopt::Long;
my $mff = "";
GetOptions('mff' => \$mff);

# set a default $mff variable
if ($mff eq "") {$mff = "-#-"};
print ("using file switch=", $mff, "\n\n");

while($_ = shift @ARGV) {
    if(-f "$_") {
        push @filelist, $_;
    }
}

# Could be more than one file name on the command line,
# but this version throws away the subsequent ones.
$readfile = $filelist[0];

open SOURCEFILE, "<$readfile" or die "File not found...\n\n";
#print SOURCEFILE;

while (<SOURCEFILE>) {
    /^$mff (.*$)/o;
    $outname = $1;
    # print $outname;
    # print "right is: $1 \n";
    if (/^$mff /) {
        open OUTFILE, ">$outname";
        print "opened $outname\n";
    } else {
        print OUTFILE "$_";
    }
}

3 Comments

Can you please explain why this code works? I have a similar situation to what you've described here - the required output file names are embedded inside the file. But I'm not a regular perl user so can't quite make sense of this code.
The real beef is in the final while loop. If it finds the mff regex at beginning of line, it uses the rest of the line as the filename to open and start writing to. It never closes anything so it will run out of file handles after a few dozen.
The script would actually be improved by removing most of the code before the final while loop and switching to while (<>)

The following command works for me. Hope it helps.

awk 'BEGIN{file = 0; filename = "output_" file ".txt"} /-\|/ {getline; file++; filename = "output_" file ".txt"} {print $0 > filename}' input

7 Comments

This will run out of file handles after typically a few dozen files. The fix is to explicitly close the old file when you start a new one.
@tripleee how do you close it (beginner awk question). Can you provide an updated example?
@JesperRønn-Jensen This box is probably too small for any useful example but basically if (file) close(filename); before assigning a new filename value.
aah found out how to close it: ; close(filename). Really simple, but it really fixes the example above
@JesperRønn-Jensen I rolled back your edit because you provided a broken script. Significant edits to other people's answers should probably be avoided -- feel free to post a new answer of your own (perhaps as a community wiki) if you think a separate answer is merited.
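Folding the close() fix from these comments back into the one-liner gives something like this (a sketch; like the original, it drops the delimiter line itself rather than copying it into the output):

awk 'BEGIN{file = 0; filename = "output_" file ".txt"} /-\|/ {close(filename); getline; file++; filename = "output_" file ".txt"} {print $0 > filename}' input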

You can also use awk. I'm not very familiar with awk, but the following seemed to work for me. It generated part1.txt, part2.txt, part3.txt, and part4.txt. Note that the last partn.txt file this generates is empty. I'm not sure how to fix that, but I'm sure it could be done with a little tweaking. Any suggestions?

awk_pattern file:

BEGIN { fn = "part1.txt"; n = 1 }
{
    print > fn
    if (substr($0,1,2) == "-|") {
        close(fn)
        n++
        fn = "part" n ".txt"
    }
}

bash command:

awk -f awk_pattern input.file
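As for the empty last partn.txt: one low-tech answer is to simply delete empty parts after the split (a sketch; -empty and -delete are supported by GNU and BSD find, but are not strictly POSIX):

awk -f awk_pattern input.file
find . -maxdepth 1 -name 'part*.txt' -empty -delete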

Comments


Here's a Python 3 script that splits a file into multiple files based on a filename provided by the delimiters. Example input file:

# Ignored
######## FILTER BEGIN foo.conf
This goes in foo.conf.
######## FILTER END
# Ignored
######## FILTER BEGIN bar.conf
This goes in bar.conf.
######## FILTER END

Here's the script:

#!/usr/bin/env python3

import os
import argparse

# global settings
start_delimiter = '######## FILTER BEGIN'
end_delimiter = '######## FILTER END'

# parse command line arguments
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--input-file", required=True, help="input filename")
parser.add_argument("-o", "--output-dir", required=True, help="output directory")
args = parser.parse_args()

# read the input file
with open(args.input_file, 'r') as input_file:
    input_data = input_file.read()

# iterate through the input data by line
input_lines = input_data.splitlines()
while input_lines:
    # discard lines until the next start delimiter
    while input_lines and not input_lines[0].startswith(start_delimiter):
        input_lines.pop(0)

    # corner case: no delimiter found and no more lines left
    if not input_lines:
        break

    # extract the output filename from the start delimiter
    output_filename = input_lines.pop(0).replace(start_delimiter, "").strip()
    output_path = os.path.join(args.output_dir, output_filename)

    # open the output file
    print("extracting file: {0}".format(output_path))
    with open(output_path, 'w') as output_file:
        # while we have lines left and they don't match the end delimiter
        while input_lines and not input_lines[0].startswith(end_delimiter):
            output_file.write("{0}\n".format(input_lines.pop(0)))

    # remove the end delimiter if present
    if input_lines:
        input_lines.pop(0)

Finally here's how you run it:

$ python3 script.py -i input-file.txt -o ./output-folder/ 

Comments


Use csplit if you have it.

If you don't, but you have Python... don't use Perl.

Lazy reading of the file

Your file may be too large to hold in memory all at once - reading line by line may be preferable. Assume the input file is named "samplein":

$ python3 -c "
from itertools import count
with open('samplein') as file:
    for i in count():
        firstline = next(file, None)
        if firstline is None:
            break
        with open(f'out{i}', 'w') as out:
            out.write(firstline)
            for line in file:
                out.write(line)
                if line == '-|\n':
                    break
"

2 Comments

This will read the entire file into memory, which means it will be inefficient or even fail for large files.
@tripleee I have updated the answer to handle very large files.
cat file| ( I=0; echo -n "">file0; while read line; do echo $line >> file$I; if [ "$line" == '-|' ]; then I=$[I+1]; echo -n "" > file$I; fi; done ) 

and the formatted version:

#!/bin/bash
cat FILE | (
    I=0
    echo -n "" > file0
    while read line
    do
        echo $line >> file$I
        if [ "$line" == '-|' ]
        then
            I=$[I+1]
            echo -n "" > file$I
        fi
    done
)

6 Comments

@Reishin The linked page explains in much more detail how you can avoid cat on a single file in every situation. There is a Stack Overflow question with more discussion (though the accepted answer is IMHO off); stackoverflow.com/questions/11710552/useless-use-of-cat
The shell is typically very inefficient at this sort of thing anyway; if you can't use csplit, an Awk solution is probably much preferable to this solution (even if you were to fix the problems reported by shellcheck.net etc.; note that it doesn't currently find all the bugs in this).
@tripleee but if the task is to do it without awk, csplit and etc - only bash?
Then the cat is still useless, and the rest of the script could be simplified and corrected a good deal; but it will still be slow. See e.g. stackoverflow.com/questions/13762625/…
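For reference, a sketch of the same loop with the useless cat removed and the quoting and read problems that shellcheck reports addressed (still slow compared to awk or csplit):

#!/bin/bash
I=0
: > "file$I"
while IFS= read -r line; do
    printf '%s\n' "$line" >> "file$I"
    if [ "$line" = '-|' ]; then
        I=$((I+1))
        : > "file$I"
    fi
done < FILE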

This is the sort of problem I wrote context-split for: http://stromberg.dnsalias.org/~strombrg/context-split.html

$ ./context-split -h
usage: ./context-split [-s separator] [-n name] [-z length]
   -s specifies what regex should separate output files
   -n specifies how output files are named (default: numeric)
   -z specifies how long numbered filenames (if any) should be
   -i include line containing separator in output files
operations are always performed on stdin
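Going by that help text, an invocation for this question's delimiter might look like the line below (untested; the exact regex escaping the tool expects is an assumption):

./context-split -s '-\|' -i < infile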

2 Comments

Uh, this looks like essentially a duplicate of the standard csplit utility. See @richard's answer.
This is actually the best solution imo. I've had to split a 98G mysql dump and csplit for some reason eats up all my RAM, and is killed. Even though it should only need to match one line at the time. Makes no sense. This python script works much better and doesn't eat up all the ram.

Here is some Perl code that will do the job:

#!/usr/bin/perl
open(FI, "file.txt") or die "Input file not found";
$cur = 0;
open(FO, ">res.$cur.txt") or die "Cannot open output file $cur";
while(<FI>) {
    print FO $_;
    if(/^-\|/) {
        close(FO);
        $cur++;
        open(FO, ">res.$cur.txt") or die "Cannot open output file $cur";
    }
}
close(FO);

Comments


Try this Python script:

import os
import argparse

delimiter = '-|'

parser = argparse.ArgumentParser()
parser.add_argument("-i", "--input-file", required=True, help="input txt")
parser.add_argument("-o", "--output-dir", required=True, help="output directory")
args = parser.parse_args()

counter = 1
output_filename = 'part-' + str(counter)
with open(args.input_file, 'r') as input_file:
    for line in input_file.read().split('\n'):
        if delimiter in line:
            counter = counter + 1
            output_filename = 'part-' + str(counter)
            print('Section ' + str(counter) + ' Started')
        else:
            # skips empty lines (change the condition if you want empty lines too)
            if line.strip():
                output_path = os.path.join(args.output_dir, output_filename + '.txt')
                with open(output_path, 'a') as output_file:
                    output_file.write("{0}\n".format(line))

Example:

python split.py -i ./to-split.txt -o ./output-dir

Comments
