285

How do I read a file into a std::string, i.e., read the whole file at once?

Text or binary mode should be specified by the caller. The solution should be standard-compliant, portable and efficient. It should not needlessly copy the string's data, and it should avoid reallocations of memory while reading the string.

One way to do this would be to stat the file size, resize the std::string and fread() into the std::string's const_cast<char*>()'ed data(). This requires the std::string's data to be contiguous, which is not required by the standard, but it appears to be the case for all known implementations. Worse, if the file is read in text mode, the std::string's size may not equal the file's size.
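For concreteness, a minimal sketch of that approach (POSIX stat() assumed; the helper name is just for illustration, and it writes through &data[0] after resizing rather than through a const_cast):

#include <cstdio>
#include <string>
#include <sys/stat.h>

// Hypothetical sketch of the stat-then-fread approach described above.
std::string slurp_via_stat(const char* filename) {
    struct stat st;
    if (::stat(filename, &st) != 0) return {};             // error handling elided
    std::string data(static_cast<std::size_t>(st.st_size), '\0');
    if (std::FILE* f = std::fopen(filename, "rb")) {
        std::fread(&data[0], 1, data.size(), f);            // read straight into the string's storage
        std::fclose(f);
    }
    return data;
}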

A fully correct, standard-compliant and portable solution could be constructed using std::ifstream's rdbuf() into a std::ostringstream and from there into a std::string. However, this could copy the string data and/or needlessly reallocate memory.

  • Are all relevant standard library implementations smart enough to avoid all unnecessary overhead?
  • Is there another way to do it?
  • Did I miss some hidden Boost function that already provides the desired functionality?


void slurp(std::string& data, bool is_binary) 
  • 2
    Text and binary mode are MSDOS & Windows specific hacks that try to get around the fact that newlines are represented by two characters in Windows (CR/LF). In text mode, they are treated as one character ('\n'). Commented Sep 22, 2008 at 16:54
  • 2
    Although not (quite) an exactly duplicate, this is closely related to: how to pre-allocate memory for a std::string object? (which, contrary to Konrad's statement above, included code to do this, reading the file directly into the destination, without doing an extra copy). Commented Sep 21, 2012 at 15:11
  • 2
    "contiguous is not required by the standard" - yes it is, in a roundabout way. As soon as you use op[] on the string, it must be coalesced into a contiguous writable buffer, so it is guaranteed safe to write to &str[0] if you .resize() large enough first. And in C++11, string is simply always contiguous. Commented Jul 19, 2013 at 17:22
  • 4
    Related link: How to read a file in C++? -- benchmarks and discusses the various approaches. And yes, rdbuf (the one in the accepted answer) isn't the fastest, read is. Commented Nov 27, 2014 at 4:24
  • 3
    All of these solutions will lead to malformed strings if your file encoding/interpretation is incorrect. I was having a really weird issue when serializing a JSON file into a string until I manually converted it to UTF-8; I was only ever getting the first character no matter what solution I tried! Just a gotcha to watch out for! :) Commented Nov 1, 2018 at 4:18

24 Answers

180

One way is to flush the stream buffer into a separate memory stream, and then convert that to std::string (error handling omitted):

std::string slurp(std::ifstream& in) {
    std::ostringstream sstr;
    sstr << in.rdbuf();
    return sstr.str();
}

This is nicely concise. However, as noted in the question this performs a redundant copy and unfortunately there is fundamentally no way of eliding this copy.

The only real solution that avoids redundant copies is to do the reading manually in a loop, unfortunately. Since C++ now has guaranteed contiguous strings, one could write the following (≥C++17, error handling included):

auto read_file(std::string_view path) -> std::string {
    constexpr auto read_size = std::size_t(4096);
    auto stream = std::ifstream(path.data());
    stream.exceptions(std::ios_base::badbit);

    if (not stream) {
        throw std::ios_base::failure("file does not exist");
    }

    auto out = std::string();
    auto buf = std::string(read_size, '\0');
    while (stream.read(&buf[0], read_size)) {
        out.append(buf, 0, stream.gcount());
    }
    out.append(buf, 0, stream.gcount());
    return out;
}

28 Comments

@sehe: I would expect any halfway-competent C++ coder to readily understand that one-liner. It's pretty tame compared to other stuff being around.
@DevSolar Well, the more legible version is ~30% shorter, lacks a cast and is otherwise equivalent. My question therefore stands: "What's the point of making it a oneliner?"
Just wanted to add that, for someone learning C++, this is hard to understand at first glance.
note: this method reads the file into the stringstream's buffer, then copies that whole buffer into the string. I.e. requiring twice as much memory as some of the other options. (There is no way to move the buffer, though see the C++20 note after this thread.) For a large file this would be a significant penalty, perhaps even causing an allocation failure.
@DanNissenbaum You're confusing something. Conciseness is indeed important in programming, but the proper way to achieve it is to decompose the problem into parts and encapsulate those into independent units (functions, classes, etc). Adding functions doesn't detract from conciseness; quite the contrary.
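A follow-up to the comment above about the buffer not being movable: since C++20, std::ostringstream has an rvalue-qualified str() overload that moves the accumulated string out instead of copying it, so the second copy (though not the read into the stringstream itself) can be avoided. A sketch, assuming C++20:

#include <fstream>
#include <sstream>
#include <string>

std::string slurp_cpp20(std::ifstream& in) {
    std::ostringstream sstr;
    sstr << in.rdbuf();
    return std::move(sstr).str();  // C++20: moves the buffer out of the stream
}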
94

The shortest variant: Live On Coliru

std::string str(std::istreambuf_iterator<char>{ifs}, {}); 

It requires the header <iterator>.

There were some reports that this method is slower than preallocating the string and using std::istream::read. However, on a modern compiler with optimisations enabled this no longer seems to be the case, though the relative performance of various methods seems to be highly compiler dependent.

7 Comments

Could you expand on this answer? How efficient is it? Does it read the file a char at a time? Is there any way to preallocate the string memory?
@M.M The way I read that comparison, this method is slower than the pure C++ reading-into-a-preallocated-buffer method.
You're right, it's a case of the title being under the code sample, rather than above it :)
Will this method trigger memory reallocation for many times ?
@coincheung This should avoid repeat allocations but, in practice, it stupidly doesn’t. The “canonical” way of reading a whole file in C++17 is gist.github.com/klmr/849cbb0c6e872dff0fdcc54787a66103. Unfortunately very verbose.
56

See this answer on a similar question.

For your convenience, I'm reposting CTT's solution:

string readFile2(const string &fileName) {
    ifstream ifs(fileName.c_str(), ios::in | ios::binary | ios::ate);

    ifstream::pos_type fileSize = ifs.tellg();
    ifs.seekg(0, ios::beg);

    vector<char> bytes(fileSize);
    ifs.read(bytes.data(), fileSize);

    return string(bytes.data(), fileSize);
}

This solution resulted in about 20% faster execution times than the other answers presented here, when taking the average of 100 runs against the text of Moby Dick (1.3M). Not bad for a portable C++ solution, I would like to see the results of mmap'ing the file ;)

16 Comments

related: time performance comparison of various methods: Reading in an entire file at once in C++
Up until today, I have never witnessed tellg() reporting non-filesize results. Took me hours to find the source of the bug. Please do not use tellg() to get the file size. stackoverflow.com/questions/22984956/…
also you need to check for empty files as you'll dereference nullptr by &bytes[0]
@paxos1977> stating on which systems your program is defined to be correct is up to you. As is, it relies on guarantees that are not provided by C++, and as such is wrong. If it works on a known set of implementations that do provide such guarantees (as in: documented as guarantees, not mere "it happens to look okay today on that version I have around"), then make that explicit, otherwise it's misleading.
Perfect reasoning for building brittle codebases that break unexpectedly because whatever behavior I observed one day was "portable enough". Until someone changed it. It's not like we have a history of that happening over and over again. Proper engineering is done by building upon guarantees, not probing whatever seems to work now and hoping for the best. Thus: this code is only sound engineering on implementations where its assumptions are guaranteed. [note: I did not talk about whether it happens to work or not today, that is irrelevant]
55

If you have C++17 (std::filesystem), there is also this way (which gets the file's size through std::filesystem::file_size instead of seekg and tellg):

#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

std::string readFile(fs::path path) {
    // Open the stream to 'lock' the file.
    std::ifstream f(path, std::ios::in | std::ios::binary);

    // Obtain the size of the file.
    const auto sz = fs::file_size(path);

    // Create a buffer.
    std::string result(sz, '\0');

    // Read the whole file into the buffer.
    f.read(result.data(), sz);

    return result;
}

Note: you may need to use <experimental/filesystem> and std::experimental::filesystem if your standard library doesn't yet fully support C++17. You might also need to replace result.data() with &result[0] if it doesn't support non-const std::basic_string data.
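For reference, a pre-C++17 variant under those substitutions might look like the following sketch (assumes <experimental/filesystem> is available and linked, e.g. with -lstdc++fs on older GCC; the function name is illustrative):

#include <experimental/filesystem>
#include <fstream>
#include <string>

namespace fs = std::experimental::filesystem;

std::string readFileLegacy(const fs::path& path) {
    // Pre-C++17 std::ifstream has no path constructor, so pass a string.
    std::ifstream f(path.string(), std::ios::in | std::ios::binary);
    const auto sz = fs::file_size(path);
    std::string result(sz, '\0');
    // &result[0] instead of result.data(), which is const-only before C++17.
    f.read(&result[0], static_cast<std::streamsize>(sz));
    return result;
}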

9 Comments

This may cause undefined behaviour; opening the file in text mode yields a different stream than the disk file on some operating systems.
Originally developed as boost::filesystem so you can also use boost if you don't have c++17
Opening a file with one API and getting its size with another seems to be asking for inconsistency and race conditions.
What's the advantage of using std::filesystem::file_size instead of seekg and tellg?
This answer got my vote. It's modern and does exactly what I need.
28

Use

#include <iostream>
#include <sstream>
#include <fstream>

int main() {
    std::ifstream input("file.txt");
    std::stringstream sstr;
    while (input >> sstr.rdbuf());
    std::cout << sstr.str() << std::endl;
}

or something very close. I don't have a stdlib reference open to double-check myself.

Yes, I understand I didn't write the slurp function as asked.

3 Comments

This looks nice, but it doesn't compile. Changes to make it compile reduce it to other answers on this page. ideone.com/EyhfWm
Why the while loop?
Agreed. When operator>> reads into a std::basic_streambuf, it will consume (what's left of) the input stream, so the loop is unnecessary.
21

I do not have enough reputation to comment directly on responses using tellg().

Please be aware that tellg() can return -1 on error. If you're passing the result of tellg() as an allocation parameter, you should sanity check the result first.

An example of the problem:

...
std::streamsize size = file.tellg();
std::vector<char> buffer(size);
...

In the above example, if tellg() encounters an error it will return -1. Implicit conversion between signed (i.e. the result of tellg()) and unsigned (i.e. the argument to the vector<char> constructor) will result in your vector erroneously allocating a very large number of bytes. (Probably 4294967295 bytes, or 4GB.)

Modifying paxos1977's answer to account for the above:

string readFile2(const string &fileName) {
    ifstream ifs(fileName.c_str(), ios::in | ios::binary | ios::ate);

    ifstream::pos_type fileSize = ifs.tellg();
    if (fileSize < 0)                             // <--- ADDED
        return std::string();                     // <--- ADDED

    ifs.seekg(0, ios::beg);

    vector<char> bytes(fileSize);
    ifs.read(&bytes[0], fileSize);

    return string(&bytes[0], fileSize);
}

1 Comment

Not only that, but tellg() does not return the size but a token. Many systems use a byte offset as a token, but this is not guaranteed, and some systems do not. Check this answer for an example.
11

Since this seems like a widely used utility, my approach would be to search for and prefer already available libraries over hand-made solutions, especially if boost libraries are already linked (linker flags -lboost_system -lboost_filesystem) in your project. Here (and in older boost versions too), boost provides a load_string_file utility:

#include <iostream>
#include <string>
#include <boost/filesystem/string_file.hpp>

int main() {
    std::string result;
    boost::filesystem::load_string_file("aFileName.xyz", result);
    std::cout << result.size() << std::endl;
}

As an advantage, this function doesn't seek through the entire file to determine the size; it uses stat() internally instead. As a possibly negligible disadvantage, easily inferred upon inspection of the source code: the string is unnecessarily filled with '\0' characters, which are then overwritten by the file contents.

2 Comments

I believe string_file is deprecated in boost v1.79
And removed in boost 1.84
9

This solution adds error checking to the rdbuf()-based method.

std::string file_to_string(const std::string& file_name) {
    std::ifstream file_stream{file_name};

    if (file_stream.fail()) {
        // Error opening file.
    }

    std::ostringstream str_stream{};
    file_stream >> str_stream.rdbuf();  // NOT str_stream << file_stream.rdbuf()

    if (file_stream.fail() && !file_stream.eof()) {
        // Error reading file.
    }

    return str_stream.str();
}

I'm adding this answer because adding error-checking to the original method is not as trivial as you'd expect. The original method uses stringstream's insertion operator (str_stream << file_stream.rdbuf()). The problem is that this sets the stringstream's failbit when no characters are inserted. That can be due to an error or it can be due to the file being empty. If you check for failures by inspecting the failbit, you'll encounter a false positive when you read an empty file. How do you disambiguate legitimate failure to insert any characters and "failure" to insert any characters because the file is empty?

You might think to explicitly check for an empty file, but that's more code and associated error checking.

Checking for the failure condition str_stream.fail() && !str_stream.eof() doesn't work, because the insertion operation doesn't set the eofbit (on the ostringstream nor the ifstream).

So, the solution is to change the operation. Instead of using ostringstream's insertion operator (<<), use ifstream's extraction operator (>>), which does set the eofbit. Then check for the failure condition file_stream.fail() && !file_stream.eof().

Importantly, when file_stream >> str_stream.rdbuf() encounters a legitimate failure, it shouldn't ever set eofbit (according to my understanding of the specification). That means the above check is sufficient to detect legitimate failures.

Comments

6

Something like this shouldn't be too bad:

void slurp(std::string& data, const std::string& filename, bool is_binary)
{
    std::ios_base::openmode openmode = ios::ate | ios::in;
    if (is_binary)
        openmode |= ios::binary;

    ifstream file(filename.c_str(), openmode);
    data.clear();
    data.reserve(file.tellg());
    file.seekg(0, ios::beg);
    data.append(istreambuf_iterator<char>(file.rdbuf()),
                istreambuf_iterator<char>());
}

The advantage here is that we do the reserve first so we won't have to grow the string as we read things in. The disadvantage is that we do it char by char. A smarter version could grab the whole read buf and then call underflow.
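A sketch of what that smarter version might look like, pulling whole chunks out of the stream buffer with sgetn() (which refills the get area via underflow as needed) instead of going character by character; the chunk size and function name are illustrative:

#include <fstream>
#include <string>

void slurp_chunked(std::string& data, const std::string& filename) {
    std::ifstream file(filename, std::ios::binary | std::ios::ate);
    data.clear();
    data.reserve(static_cast<std::size_t>(std::streamoff(file.tellg())));
    file.seekg(0, std::ios::beg);

    char chunk[4096];
    std::streamsize n;
    // sgetn() copies out whatever the stream buffer holds, refilling it as needed.
    while ((n = file.rdbuf()->sgetn(chunk, sizeof chunk)) > 0) {
        data.append(chunk, static_cast<std::size_t>(n));
    }
}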

1 Comment

You should check out the version of this code that uses std::vector for the initial read rather than a string. Much much faster.
6

Here's a version using the new filesystem library with reasonably robust error checking:

#include <cstdint>
#include <exception>
#include <filesystem>
#include <fstream>
#include <sstream>
#include <string>

namespace fs = std::filesystem;

std::string loadFile(const char *const name);
std::string loadFile(const std::string &name);

std::string loadFile(const char *const name) {
    fs::path filepath(fs::absolute(fs::path(name)));

    std::uintmax_t fsize;
    if (fs::exists(filepath)) {
        fsize = fs::file_size(filepath);
    } else {
        throw(std::invalid_argument("File not found: " + filepath.string()));
    }

    std::ifstream infile;
    infile.exceptions(std::ifstream::failbit | std::ifstream::badbit);
    try {
        infile.open(filepath.c_str(), std::ios::in | std::ifstream::binary);
    } catch (...) {
        std::throw_with_nested(std::runtime_error("Can't open input file " + filepath.string()));
    }

    std::string fileStr;
    try {
        fileStr.resize(fsize);
    } catch (...) {
        std::stringstream err;
        err << "Can't resize to " << fsize << " bytes";
        std::throw_with_nested(std::runtime_error(err.str()));
    }

    infile.read(fileStr.data(), fsize);
    infile.close();

    return fileStr;
}

std::string loadFile(const std::string &name) { return loadFile(name.c_str()); };

5 Comments

infile.open can also accept std::string without converting with .c_str()
filepath isn't a std::string, it's a std::filesystem::path. Turns out std::ifstream::open can accept one of those as well.
@DavidG, std::filesystem::path is implicitly convertible to std::string
According to cppreference.com, the ::open member function on std::ifstream that accepts std::filesystem::path operates as if the ::c_str() method were called on the path. The underlying ::value_type of paths is char under POSIX.
note that the file can change between file_size and open
3

I know this is a positively ancient question with a plethora of answers, but not one of them mentions what I would have considered the most obvious way to do this. Yes, I know this is C++, and using libc is evil and wrong or whatever, but nuts to that. Using libc is fine, especially for such a simple thing as this.

Essentially: just open the file, get its size (not necessarily in that order), and read it.

#include <string>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <sys/stat.h>

static constexpr char filename[] = "foo.bar";

int main(void)
{
    FILE *fp = ::fopen(filename, "rb");
    if (!fp) {
        ::perror("fopen");
        ::exit(1);
    }

    // Stat isn't strictly part of the standard C library,
    // but it's in every libc I've ever seen for a hosted system.
    struct stat st;
    if (::fstat(::fileno(fp), &st) == (-1)) {
        ::perror("fstat");
        ::exit(1);
    }

    // You could simply allocate a buffer here and use std::string_view, or
    // even allocate a buffer and copy it to a std::string. Creating a
    // std::string and setting its size is simplest, but will pointlessly
    // initialize the buffer to 0. You can't win sometimes.
    std::string str;
    str.reserve(st.st_size + 1U);
    str.resize(st.st_size);
    ::fread(str.data(), 1, st.st_size, fp);
    str[st.st_size] = '\0';

    ::fclose(fp);
}

This doesn't really seem worse than some of the other solutions, in addition to being (in practice) completely portable. One could also throw an exception instead of exiting immediately, of course. It seriously irritates me that resizing the std::string always 0 initializes it, but it can't be helped.

PLEASE NOTE that this is only going to work as written for C++17 and later. Earlier versions (ought to) disallow editing std::string::data(). If working with an earlier version consider replacing str.data() with &str[0].

2 Comments

What's the std::string?
@einpoklum I figured the OP would be able to make one themselves from the C string. On reflection you're probably right, the question specifically asked how to read it into a std::string. Updated.
2

You can use the 'std::getline' function, and specify 'eof' as the delimiter. The resulting code is a little bit obscure though:

std::string data;
std::ifstream in("test.txt");
std::getline(in, data, std::string::traits_type::to_char_type(
                           std::string::traits_type::eof()));

2 Comments

I just tested this, it appears to be much slower than getting the file size and calling read for the whole file size into a buffer. On the order of 12x slower.
This will only work, as long as there are no "eof" (e.g. 0x00, 0xff, ...) characters in your file. If there are, you will only read part of the file.
1
#include <string>
#include <sstream>
#include <fstream>
#include <iostream>
#include <system_error>

using namespace std;

string GetStreamAsString(const istream& in) {
    stringstream out;
    out << in.rdbuf();
    return out.str();
}

string GetFileAsString(const string& filePath) {
    ifstream stream;
    try {
        // Set to throw on failure
        stream.exceptions(fstream::failbit | fstream::badbit);
        stream.open(filePath);
    } catch (system_error& error) {
        cerr << "Failed to open '" << filePath << "'\n"
             << error.code().message() << endl;
        return "Open fail";
    }
    return GetStreamAsString(stream);
}

usage:

const string logAsString = GetFileAsString(logFilePath); 

Comments

1

For performance I haven't found anything faster than the code below.

#include <cassert>
#include <cstdio>
#include <cstdlib>
#include <string>

std::string readAllText(std::string const &path) {
    assert(path.c_str() != NULL);

    FILE *stream = fopen(path.c_str(), "r");
    assert(stream != NULL);

    fseek(stream, 0, SEEK_END);
    long stream_size = ftell(stream);
    fseek(stream, 0, SEEK_SET);

    void *buffer = malloc(stream_size);
    assert(buffer != NULL);

    fread(buffer, stream_size, 1, stream);
    assert(ferror(stream) == 0);
    fclose(stream);

    std::string text((const char *)buffer, stream_size);
    free((void *)buffer);

    return text;
}

2 Comments

This can certainly be sped up faster. For one thing, use rb (binary) mode instead of r (text) mode. And get rid of malloc(), you don't need it. You can resize() a std::string and then fread() directly into its memory buffer. No need to malloc() a buffer and then copy it into a std::string.
@RemyLebeau resize() does pointlessly 0 initialize the memory though. Still faster than a full copy, of course, but pointless all the same. As to this post: using an assertion to check the result of fopen() is straight up Evil and Wrong. It must ALWAYS be checked, not only in a debug build. With this implementation a simple typo would cause undefined behavior (sure, in practice a segfault, but that's hardly the point).
1

Pulling info from several places... This should be the fastest and best way:

#include <filesystem>
#include <fstream>
#include <iostream>
#include <string>

//Returns true if successful.
bool readInFile(std::string pathString) {
    //Make sure the file exists and is an actual file.
    if (!std::filesystem::is_regular_file(pathString)) {
        return false;
    }

    //Convert relative path to absolute path.
    pathString = std::filesystem::weakly_canonical(pathString);

    //Open the file for reading (binary is fastest).
    std::wifstream in(pathString, std::ios::binary);

    //Make sure the file opened.
    if (!in) {
        return false;
    }

    //Wide string to store the file's contents.
    std::wstring fileContents;

    //Jump to the end of the file to determine the file size.
    in.seekg(0, std::ios::end);

    //Resize the wide string to be able to fit the entire file (Note: Do not use reserve()!).
    fileContents.resize(in.tellg());

    //Go back to the beginning of the file to start reading.
    in.seekg(0, std::ios::beg);

    //Read the entire file's contents into the wide string.
    in.read(fileContents.data(), fileContents.size());

    //Close the file.
    in.close();

    //Do whatever you want with the file contents.
    std::wcout << fileContents << L" " << fileContents.size();

    return true;
}

This reads in wide characters into a std::wstring, but you can easily adapt if you just want regular characters and a std::string.

Comments

0

You can use the rst C++ library that I developed to do that:

#include "rst/files/file_utils.h" std::filesystem::path path = ...; // Path to a file. rst::StatusOr<std::string> content = rst::ReadFile(path); if (content.err()) { // Handle error. } std::cout << *content << ", " << content->size() << std::endl; 

Comments

0

For small to medium-sized files I use these methods, which are quite fast. The one returning a string can be used to "convert" the byte array to a string.

auto read_file_bytes(std::string_view filepath) -> std::vector<std::byte> {
    std::ifstream ifs(filepath.data(), std::ios::binary | std::ios::ate);

    if (!ifs)
        throw std::ios_base::failure("File does not exist");

    auto end = ifs.tellg();
    ifs.seekg(0, std::ios::beg);

    auto size = std::size_t(end - ifs.tellg());

    if (size == 0) // avoid undefined behavior
        return {};

    std::vector<std::byte> buffer(size);

    if (!ifs.read((char *) buffer.data(), buffer.size()))
        throw std::ios_base::failure("Read error");

    return buffer;
}

auto read_file_string(std::string_view filepath) -> std::string {
    auto bytes = read_file_bytes(filepath);
    return std::string(reinterpret_cast<char *>(bytes.begin().base()), bytes.size());
}

1 Comment

bytes.begin().base() should be bytes.data() instead. You could alternatively use the std::string constructor that accepts iterators, eg: return std::string(bytes.begin(), bytes.end());. But, I would suggest re-writing read_file_string() to not rely on read_file_bytes() at all. You should just use the same reading logic but read the file data directly into the target std::string rather than into an intermediate std::vector that is then converted into a std::string. That would be more efficient, and won't need to store two copies of the file data in memory, if only briefly.
0

This is the function I use. When dealing with large files (1GB+), for some reason std::ifstream::read() is much faster than std::ifstream::rdbuf() when you know the file size, so the whole "check the file size first" thing is actually a speed optimization.

#include <string>
#include <fstream>
#include <sstream>

std::string file_get_contents(const std::string &filename) {
    std::ifstream file(filename, std::ifstream::binary);
    file.exceptions(std::ifstream::failbit | std::ifstream::badbit);
    file.seekg(0, std::istream::end);
    const std::streampos ssize = file.tellg();

    if (ssize < 0) {
        // Can't get size for some reason, fall back to the slower "just read everything";
        // because I don't trust that we could seek back/forth in the original stream,
        // I'm creating a new stream.
        std::ifstream file(filename, std::ifstream::binary);
        file.exceptions(std::ifstream::failbit | std::ifstream::badbit);
        std::ostringstream ss;
        ss << file.rdbuf();
        return ss.str();
    }

    file.seekg(0, std::istream::beg);
    std::string result(size_t(ssize), 0);
    file.read(&result[0], std::streamsize(ssize));
    return result;
}

4 Comments

std::string result(size_t(ssize), 0); fills the string with the char 0 (null or \0), this may be considered "unnecessary overhead", as per the OP's question
@MarcheRemi Indeed, it's basically like using calloc() when all you need is malloc(). That said, creating a string of uninitialized bytes is really hard - I think you can supply a custom allocator to actually make it happen, but it seems nobody has figured out exactly how yet. (See the C++23 note after this thread.)
What's with the dollar sign?
@einpoklum that was... inherited from php's api, it's a (incomplete) port of php's file_get_contents
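Regarding the uninitialized-bytes point in the thread above: C++23 added std::string::resize_and_overwrite, which hands out storage without the mandatory zero-fill. A sketch of how it could be combined with read() (C++23 only; the helper name is illustrative, error handling elided):

#include <filesystem>
#include <fstream>
#include <string>

std::string read_file_cpp23(const std::filesystem::path& p) {
    std::ifstream in(p, std::ios::binary);
    const auto n = std::filesystem::file_size(p);
    std::string s;
    // The new characters are not zero-initialized; the lambda returns the final length.
    s.resize_and_overwrite(n, [&](char* buf, std::size_t len) {
        in.read(buf, static_cast<std::streamsize>(len));
        return static_cast<std::size_t>(in.gcount());
    });
    return s;
}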
-1

Never write into the std::string's const char * buffer. Never ever! Doing so is a massive mistake.

Reserve() space for the whole string in your std::string, read chunks of reasonable size from your file into a buffer, and append() them. How large the chunks have to be depends on your input file size. I'm pretty sure all other portable and STL-compliant mechanisms will do the same (yet may look prettier).
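A minimal sketch of that reserve-and-append pattern (chunk size and names are illustrative; error handling elided):

#include <fstream>
#include <string>

std::string slurp_append(const std::string& filename) {
    std::ifstream file(filename, std::ios::binary | std::ios::ate);
    std::string data;
    if (!file) return data;

    data.reserve(static_cast<std::size_t>(std::streamoff(file.tellg())));  // reserve once up front
    file.seekg(0, std::ios::beg);

    char buf[16384];
    while (file.read(buf, sizeof buf) || file.gcount() > 0) {
        data.append(buf, static_cast<std::size_t>(file.gcount()));
    }
    return data;
}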

3 Comments

Since C++11 it is guaranteed to be OK to write directly into the std::string buffer; and I believe that it did work correctly on all actual implementations prior to that
Since C++17 we even have non-const std::string::data() method for modifying string buffer directly without resorting to tricks like &str[0].
Agreed with @zett42 this answer is factually incorrect
-1

An updated function which builds upon CTT's solution:

#include <string>
#include <fstream>
#include <limits>
#include <string_view>

std::string readfile(const std::string_view path, bool binaryMode = true)
{
    std::ios::openmode openmode = std::ios::in;
    if (binaryMode) {
        openmode |= std::ios::binary;
    }

    std::ifstream ifs(path.data(), openmode);
    ifs.ignore(std::numeric_limits<std::streamsize>::max());

    std::string data(ifs.gcount(), 0);
    ifs.seekg(0);
    ifs.read(data.data(), data.size());

    return data;
}

There are two important differences:

tellg() is not guaranteed to return the offset in bytes since the beginning of the file. Instead, as Puzomor Croatia pointed out, it's more of a token which can be used within the fstream calls. gcount() however does return the amount of unformatted bytes last extracted. We therefore open the file, extract and discard all of its contents with ignore() to get the size of the file, and construct the output string based on that.

Secondly, we avoid having to copy the data of the file from a std::vector<char> to a std::string by writing to the string directly.

In terms of performance, this should be the absolute fastest, allocating the appropriately sized string ahead of time and calling read() once. As an interesting fact, using ignore() and gcount() instead of ate and tellg() on gcc compiles down to almost the same thing, bit by bit.

3 Comments

This code does not work, I'm getting empty string. I think you wanted ifs.seekg(0) instead of ifs.clear() (then it works).
std::string::data() returns const char* before C++17.
path.data() may be not null-terminated as it comes from string_view.
-1
#include <iostream>
#include <fstream>
#include <string>
using namespace std;

int main() {
    fstream file;
    //Open a file
    file.open("test.txt");

    string copy, temp;
    //While loop to store the whole document in the copy string.
    //temp reads a complete line on each iteration;
    //the loop stops once temp has read the last line of the document.
    while (getline(file, temp)) {
        //add the new line of text to copy
        copy += temp;
        //add a newline
        copy += "\n";
    }

    //Display the whole document
    cout << copy;

    //close the document
    file.close();
}

4 Comments

Please add the description.
please visit and check how to answer a question.
This is if you want to store it into a string. I would have added some description if the queue was not full.
copy is a string variable holding the whole text used in the code; you can assign it to another variable.
-1
#include <string>
#include <fstream>

int main() {
    std::string fileLocation = "C:\\Users\\User\\Desktop\\file.txt";
    std::ifstream file(fileLocation, std::ios::in | std::ios::binary);
    std::string data;

    if (file.is_open()) {
        std::getline(file, data, '\0');
        file.close();
    }
}

1 Comment

Seems to be a variant of Martin Cote's 2008 answer, which uses EOF? (And the same caveats apply as those written in the comments on that answer.) Please also try to provide more information than a block of code, see How do I write a good answer?.
-1
std::string get(std::string_view const& fn)
{
    struct filebuf : std::filebuf {
        using std::filebuf::egptr;
        using std::filebuf::gptr;
        using std::filebuf::gbump;
        using std::filebuf::underflow;
    };

    std::string r;

    if (filebuf fb; fb.open(fn.data(), std::ios::binary | std::ios::in)) {
        r.reserve(fb.pubseekoff({}, std::ios::end));
        fb.pubseekpos({});

        while (filebuf::traits_type::eof() != fb.underflow()) {
            auto const gptr(fb.gptr());
            auto const sz(fb.egptr() - gptr);
            fb.gbump(sz);
            r.append(gptr, sz);
        }
    }

    return r;
}

Comments

-2

I know that I am late to the party, but now (2021) on my machine, this is the fastest implementation that I have tested:

#include <fstream>
#include <string>

bool fileRead(std::string &contents, const std::string &path) {
    contents.clear();
    if (path.empty()) {
        return false;
    }

    std::ifstream stream(path);
    if (!stream) {
        return false;
    }

    stream >> contents;
    return true;
}

1 Comment

… how did you test?! Because this is certainly not the fastest implementation, and it doesn’t read the entire file.