I'm getting a ChunkedEncodingError using Python requests. I'm using the following to rip down JSON:

r = requests.get(url, headers=auth, stream=True) 

And then iterating over each line, using the newline character as a delimiter, which is how this API distinguishes between distinct JSON events:

for d in r.iter_lines(delimiter="\n"):
    d += "\n"
    sock.send(d)

I'm splitting on the newline and then adding it back in, because the endpoint I'm pushing the logs to also expects a newline at the end of each event. This seems to work for roughly 100k log files. When I try to make a larger call, the following gets thrown:

for d in r.iter_lines(delimiter="\n"):
logs_1 |   File "/usr/local/lib/python2.7/dist-packages/requests/models.py", line 783, in iter_lines
logs_1 |     for chunk in self.iter_content(chunk_size=chunk_size, decode_unicode=decode_unicode):
logs_1 |   File "/usr/local/lib/python2.7/dist-packages/requests/models.py", line 742, in generate
logs_1 |       raise ChunkedEncodingError(e)
logs_1 | requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))

What can I do to prevent this? If I wrap everything in a try/except looking for this exception, can I just skip to the next iteration of iter_lines?
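Something like this is what I'm imagining; just a sketch, reusing the same r and sock as above:

lines = r.iter_lines(delimiter="\n")
while True:
    try:
        d = next(lines)
    except StopIteration:
        break  # stream finished normally
    except requests.exceptions.ChunkedEncodingError:
        continue  # is it safe to just skip this read and keep going?
    if d:
        sock.send(d + "\n")

I'm not sure whether the iterator can actually be resumed after the exception, which is part of what I'm asking.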

UPDATE: One working theory is that iter_lines is getting ahead of the content coming down from r. When iter_lines passes the point r has actually read to, it reads 0 bytes and throws this error. How can I ensure that iter_lines doesn't blow past r?

UPDATE 2: I've discovered the API is sending back a NoneType at some point as well. So how can I account for this null byte somewhere in the response without blowing everything up? Each individual event ends with a \n, and I need to be able to inspect each event individually. Should I chunk the content instead of using iter_lines, and then make sure there is no NoneType in the chunk, so that I never run iter_lines over a NoneType and blow up? A rough sketch of what I'm picturing follows.
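This is approximately the buffering approach I have in mind; only a sketch, assuming every event really is terminated by \n and reusing the same sock as above:

buf = ""
for chunk in r.iter_content(chunk_size=8192):
    if not chunk:  # guard against None / empty chunks from the API
        continue
    buf += chunk
    while "\n" in buf:
        event, buf = buf.split("\n", 1)
        if event:  # one complete event, inspect it and forward it
            sock.send(event + "\n")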
