342

I've got quite an interesting problem. I push my projects to a repo from Bash, and recently the push started failing with this error:

Enumerating objects: 27, done.
Counting objects: 100% (27/27), done.
Delta compression using up to 16 threads
Compressing objects: 100% (24/24), done.
Writing objects: 100% (25/25), 187.79 KiB | 9.39 MiB/s, done.
Total 25 (delta 1), reused 0 (delta 0), pack-reused 0
send-pack: unexpected disconnect while reading sideband packet
fatal: the remote end hung up unexpectedly

The funny part is that 10 minutes earlier I could push without any problems.

I tried creating a new repo, creating a new file, reinstalling Git, and setting git config --global http.postBuffer 524288000 (with bigger numbers as well, and https.postBuffer too). Installing the desktop client produces the same issue.

I've got problems mostly with React apps.

Does anyone know the solution? What could be going wrong?

13
  • 7
Still haven't found a solution. Tried everything, even drastic things like reinstalling the system. Commented Mar 26, 2021 at 9:20
  • 1
You could try unstaging the file that you have just committed which is causing the problem for the push. For me, the problem was two PNG files; I unstaged them and the problem was solved. Commented Mar 26, 2021 at 16:10
  • 3
I tried that too; unfortunately, it doesn't work either. Commented Mar 28, 2021 at 15:49
  • stackoverflow.com/questions/21277806/… - also this Commented Sep 29, 2021 at 15:44
  • WTF. This problem is really annoying. GIT is supposed to work fine for huge projects with thousands of contributors. Yet I cannot git clone my repo from the server across the room. Worse, it waits until 100% "completion" until it fails, and you need to start from scratch. Commented Apr 20, 2023 at 12:22

39 Answers

289

It might be a network issue. If the network is too slow, the connection might be dropped unexpectedly.

If you have good internet and are still getting this message, then it might be an issue with your post buffer. Use this command to increase it (for example) to 150 MiB:

git config --global http.postBuffer 157286400 

According to git-config Documentation for http.postBuffer:

Maximum size in bytes of the buffer used by smart HTTP transports when POSTing data to the remote system. For requests larger than this buffer size, HTTP/1.1 and Transfer-Encoding: chunked is used to avoid creating a massive pack file locally. Default is 1 MiB, which is sufficient for most requests.

Note that raising this limit is only effective for disabling chunked transfer encoding and therefore should be used only where the remote server or a proxy only supports HTTP/1.0 or is noncompliant with the HTTP standard. Raising this is not, in general, an effective solution for most push problems, but can increase memory consumption significantly since the entire buffer is allocated even for small pushes.

So this is only a mitigation in cases where the server is having issues. This is most likely not going to fix push problems to GitHub or GitLab.com.
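
For reference, 157286400 is simply 150 × 1024 × 1024 bytes; any reasonable size can be used. If the larger buffer does help, the setting can also be removed again once the push has gone through; a minimal sketch:

# 150 MiB = 150 * 1024 * 1024 bytes
git config --global http.postBuffer 157286400
git push
# restore the 1 MiB default afterwards
git config --global --unset http.postBuffer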


12 Comments

Doesn't resolve the issue for me. Any comment on what the effect of this should be, and why this number?
Yeah, a command with such an obscure and specific argument should be explained and understood.
The quoted text says "this is not, in general, an effective solution", but of the four times I have encountered issues while pushing (large... should probably have used git-lfs instead, but too late now) repositories, this fixed it.
For my case, I had too many images that I had to push to the repo. Increasing buffer size worked for me.
Thanks! Solved this issue with Xcode's bundled Git on a MacBook Pro.
241

First of all, check your network connection stability.

If there is no problem with the network connection, try another solution; it may work:

On Linux

Execute the following in the command line before executing the Git command:

export GIT_TRACE_PACKET=1
export GIT_TRACE=1
export GIT_CURL_VERBOSE=1

On Windows

Execute the following in the command line before executing the Git command:

set GIT_TRACE_PACKET=1
set GIT_TRACE=1
set GIT_CURL_VERBOSE=1

In addition:

git config --global core.compression 0
git clone --depth 1 <repo_URI>
# cd to your newly created directory
git fetch --unshallow
git pull --all

For PowerShell users:

As kodybrown said in comments:

$env:GIT_TRACE_PACKET=1
$env:GIT_TRACE=1
$env:GIT_CURL_VERBOSE=1
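
Note that these three variables only turn on verbose diagnostic output (packet-level tracing, general tracing, and libcurl tracing) so you can see where the transfer stalls; they do not change Git's behaviour. To switch the tracing off again in a Linux/macOS shell, as mentioned in the comments below:

unset GIT_TRACE_PACKET GIT_TRACE GIT_CURL_VERBOSE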

24 Comments

I started experiencing send-pack: unexpected disconnect while reading sideband packet after a push. Repeated attempts would result in high CPU utilisation and finally it would just timeout. I set these three vars and suddenly it just worked. What could be going on here?
What is the proper way to reverse these changes, and can you give a bit of explanation about what export GIT_TRACE_PACKET=1 and the rest do?
git config --global core.compression 0 works
how to revoke the changes?
To revert this, unset GIT_TRACE_PACKET GIT_TRACE GIT_CURL_VERBOSE worked for me.
95

None of the above worked for me, git config --global pack.window 1 did.

For what this does, see https://git-scm.com/docs/git-pack-objects in particular the --window argument:

"The objects are first internally sorted by type, size and optionally names and compared against the other objects within --window to see if using delta compression saves space."

So setting the window to 1 effectively disables delta compression (delta compression means only the differences between objects are stored, to save space when pushing commits).
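
If this does fix the push for you, the setting can be removed again afterwards so that later pushes go back to normal delta compression; a minimal sketch:

git config --global --unset pack.window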

7 Comments

Worked great for me as well. What exactly does this do?
@BillHart I think it means only one Git object is considered at a time; the default is 10.
It didn't help for me. I still get same error :/
For anyone visiting this answer- beware of all of the different answers and comments saying "this worked for me", because the nature of this issue is fairly random. It occurs sometimes and sometimes it doesn't, meaning lots of people get it to work once and think the problem is fixed, but it isn't. I've seen the exact opposite solutions being recommended in many places; e.g. people saying use compression level 9, others saying only level 0 worked, etc. It makes their solutions practically worthless, unfortunately
66

git config --global http.version HTTP/1.1

solved the problem for me. I was previously unable to push large commits to several different GitHub repositories.
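
As the comments below note, you can flip the setting back once the problematic push has gone through; a minimal sketch:

git config --global http.version HTTP/1.1   # force HTTP/1.1 for the failing push
git push
git config --global --unset http.version    # restore the default afterwards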

9 Comments

How did you find this solution?
Are you kidding me? How the hell did that solve the issue...
Don't forget to set it back to HTTP/2.0 after the push
You can also just set it back to the default: git config --global --unset http.version
Does anyone know why this works? This worked for me as well, despite everything else listed here not working: git pack limits, upgrading git, tracing git requests with environment variables, adjusting the http postBuffer size. But I really don't have any idea WHY this works, which makes it hard to determine when this is the right solution. I'm migrating 60 more repos in the next few days, so any advice is useful.
58

I had the same problem. I have a repo with 20000 files, the whole repo is about 5 GB in size, and some files are 10 MB in size. I could commit to the repo without problems and I could clone without problems (it took a while, though). Yet, every other time I pulled this repo to my machine I got

remote: Enumerating objects: 1359, done.
remote: Counting objects: 100% (1359/1359), done.
remote: Compressing objects: 100% (691/691), done.
remote: Total 1221 (delta 530), reused 1221 (delta 530), pack-reused 0
fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: fetch-pack: invalid index-pack output

What finally helped was this tip. Go to your user directory and edit .git/config and add:

[core]
    packedGitLimit = 512m
    packedGitWindowSize = 512m
[pack]
    deltaCacheSize = 2047m
    packSizeLimit = 2047m
    windowMemory = 2047m

Voilà. No more errors.

10 Comments

I had to augment packedGitLimit, packedGitWindowSize to 1024m. (For reference, I don't have any huge files, but a lot of small ones.)
This worked for me, but only after disconnecting my VPN!
@GregarityNow How do I set up the above configuration in Jenkins? I am trying to pull a repo of 2 GB+ from GitLab.
This solution worked for me as well. PS: none other worked.
This was the only fix that worked for me
34

None of the above worked for me, but this did:

First, try to resolve the issue by increasing the post buffer:

git config --global http.postBuffer 52428800 

Then turn off compression:

git config --global core.compression 0 

Try the workaround here:

$ git clone --depth 1 https://innersource.*.com/*repoName.git
$ cd repository
$ git fetch --unshallow

3 Comments

The key trick is --depth 1 and then unshallow
How does setting a postBuffer size relate to cloning a repository? That doesn't make any sense.
Please drop the --global unless it is necessary. For existing repos, use their configuration. For git clone, use --config ... to set configuration items for the new repo.
26

Connection, VPN, Antivirus, Firewall

This is the most probable reason, especially if your Git is up to date (if not, you can update it).

The first thing to try is to check your connection and that it is stable.

My connection is excellent ===> Wait, are you using a VPN?

=> Disable it and try again. (VPNs are a big culprit for this kind of problem.)

Still doesn't work? ==> Check your antivirus and firewall.

  • Internet stable
  • VPN VPN VPN => Big culprit
  • Firewall and antivirus
  • Git up to date

If that doesn't work, see below:

VPN dynamic IP rotation is the culprit

  • If everything works after disabling the VPN, but you still need to use one, try a static IP instead. That mostly makes sense: it works and works, then suddenly fails; that is the VPN's dynamic IP rotation at play. (If you switch to a static IP and it works for you, please drop a comment to back that up.)

  • Experiment I ran:

    • Cloned git@github.com:microsoft/vscode.git with a VPN using a static IP, averaging about 100 KiB/s (the clone took over 2 hours).
      • This repo is so large that it fails to clone if a VPN with a dynamic IP is used.
      • Result: works fine with a VPN using a static IP.

Git cache, buffers and memory

https://stackoverflow.com/a/29355320/7668448

Open git global config:

git config --global -e 

And add those entries:

[core]
    packedGitLimit = 512m
    packedGitWindowSize = 512m
[pack]
    deltaCacheSize = 2047m
    packSizeLimit = 2047m
    windowMemory = 2047m

Try to clone again.

If that doesn't work! =>

You can try the partial fetch method and disabling compression:

https://stackoverflow.com/a/22317479/7668448

One command at a time

git config --global core.compression 0
git clone --depth 1 <repo_URI>
git fetch --unshallow

More details are on the link.

6 Comments

Why are VPNs a culprit for this? I'd prefer not to have to disconnect my VPN each time I want to push
I used the settings above but I had compression=0 already and git pull still gave the above error message. When I dropped the compression=0 line it worked.
@codeananda I just saw your question. Two things came to mind after thinking about it: either it is something to do with the IP rotation, since this problem only occurs with large repositories (I'll run an experiment with rotation disabled), or it may be something like the MTU issue described here: stackoverflow.com/a/61131561/7668448 (it seems to be related to SSH). I'll update the answer if I find something solid and practical.
Update: git complains when I try to push big changes with my VPN on. If I push small changes (< 20 LOC) with a VPN on, it usually works fine
Yeah, it's absolutely that. It only fails when the size is large; otherwise Git works fine with every repo. The first time I encountered it was with the vscode repo. I contacted Surfshark VPN support to ask, just in case, and mentioned those details. I'm also planning to do the rotation test and will update the answer if I find anything useful.
14

If you are using SSH URLs, you can try the following, it worked for me the two times I had the same issue:

  1. Switch to HTTPS origin URL: git remote set-url origin https://github.com/<your_repo>
  2. Do the push. It should not fail now.
  3. Switch back to SSH: git remote set-url origin git@github.com:<your_repo>

I'm still not sure what the cause of the issue is. This is just a workaround.

4 Comments

Have to resort to using this, when I'm encountering the same issue in TortoiseGit. Just clone the repository as HTTPS, then change the URL in the TortoiseGit settings.
For me, it resulted in a new error: The authenticity of host github.com can't be established.
I got the same error when cloning a huge GitHub repo (dotnet/docs), not pushing. Tried everything else and this is the only solution that worked! Thank you!
None of the other solutions presented here worked for me, but cloning the repo with SSH works just fine! SSH seems more "patient" than HTTP in this regard. Thanks!
14

On Windows 11, I solved this problem by upgrading the built-in OpenSSH, which lives in the C:\Windows\System32\OpenSSH folder.

You can download the latest binaries here: https://github.com/PowerShell/Win32-OpenSSH/releases

See issue #2012 in the release notes, which solves the problem:

v9.2.2.0p1-Beta (Latest)
This is a beta release (non-production ready). This release includes:

Security Fixes:
  • Upgrade to LibreSSL 3.7.2. Please refer to https://ftp.openbsd.org/pub/OpenBSD/LibreSSL/libressl-3.7.2-relnotes.txt
  • MSI: change inbound firewall rule that opens port 22 to apply to Private networks only

Non-Security Fixes:
  • Add U2F/Fido2 keys to the agent from other clients: #1961 - thanks @ddrown!
  • Fix output codepage after executing scp/sftp/ssh/ssh-keygen command: #2027 - thanks @kemaruya!
  • Fix early EOF termination when running git fetch over ssh: #2012 - thanks @cwgreene!
  • Revert mark-of-the-web for SCP/SFTP file downloads: #2029

5 Comments

This is the answer for Windows users, and the easiest way to update is winget install Microsoft.OpenSSH.Beta
No package found matching input
It's called Microsoft.OpenSSH.Preview now, so the command would be winget install -e --id Microsoft.OpenSSH.Preview
If you install Git on Windows and check the "use system SSH" option, you need to do this.
Wow, this solved the issue on Windows 11 after trying all of the above... Many thanks!!!
11

I didn't want to believe it, but after 3 failed clones, switching from a Wi-Fi connection (on Mac) to a hardwired connection (on Linux) made it work the first time. Not sure why!

https://serverfault.com/questions/1056419/git-wsl2-ssh-unexpected-disconnect-while-reading-sideband-packet/1060187#1060187

3 Comments

I switched from my 5G wireless network to my 2.4G network and it started working again for me
I switched from wired to wireless and the problem went away - very strange.
Switching from my 2.4G network to 5G fixed it for me - thanks for the idea @RussJackson.
10

Apply whichever of these you need:

git config --global http.postBuffer 524288000   # Set a larger buffer size
git config --global core.compression 0          # Disable compression

3 Comments

Thank you for your interest in contributing to the Stack Overflow community. This question already has quite a few answers—including one that has been extensively validated by the community. Are you certain your approach hasn’t been given previously? If so, it would be useful to explain how your approach is different, under what circumstances your approach might be preferred, and/or why you think the previous answers aren’t sufficient. Can you kindly edit your answer to offer an explanation?
Thanks, this works for me on my current Linux box. The Git version on Linux is 2.43.2, and I use Gitea 1.21.5 on my server. Windows runs an older Git version, 2.30.0.windows.1, where the problems have not occurred so far; there it works as expected without such changes.
It works for me, but I removed the --global to make it repository-specific: git config http.postBuffer 524288000 (set a larger buffer size) and git config core.compression 0 (disable compression).
7

In my case, I had a few files that were over 100 MB in size when trying to push my initial commit. Since GitHub apparently doesn't allow this, you get the error "unexpected disconnect while reading sideband packet / fatal: the remote end hung up unexpectedly".

Using git rm was not enough; I had to start all over again with git init, git add, git commit, and git push to resolve the issue.
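
If the oversized file only exists in your most recent commit (for example, the initial commit), a lighter-weight route than starting over is to drop it from the index and amend. A minimal sketch, where the file path is just a placeholder:

git rm --cached assets/huge-video.mp4      # stop tracking the file (it stays on disk)
echo "assets/huge-video.mp4" >> .gitignore # keep it out of future commits
git add .gitignore
git commit --amend --no-edit               # rewrite the last commit without the file
git push

For files buried deeper in history, you would instead need to rewrite history, for example with git filter-repo or an interactive rebase, as the comment below explains.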

1 Comment

No, you for sure didn't have to start all over again. Firstly, for such large files, there is Git LFS. Secondly, of course git rm will only create a new commit where the file is not. However, Git is a time machine, so the file is still there in the past and this can't fix your problems. Instead, you have to erase it from history if you want to get rid of it completely, which means interactive rebasing and amending the commit where you added the file so that you don't add it.
5

This is a problem with sending data over the HTTPS protocol. You can configure Git to use SSH instead.

First create an SSH key on your workstation, then add that key to GitHub so your computer can authenticate without a password. Then configure Git to use the SSH protocol instead of HTTPS.

Create ssh key

No need to repeat Github's excellent instructions here.

Simple way

Use git protocol for the clone command:

git clone git@github.com:repo-name
General method: Configure git to use ssh (git) protocol instead of https

This option is good if you are using GitHub Desktop and don't have direct control over the clone URL.

git config --global url."git@github.com:".insteadOf "https://github.com/"

For GitHub Enterprise, adapt the URLs to your instance rather than public github.com.

After this, you should be able to clone without this error.
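
If you ever want to undo the URL rewrite later, the whole section can be removed from the global config again; a minimal sketch:

git config --global --remove-section 'url.git@github.com:'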

2 Comments

Using url/insteadOf was the only thing that worked for me; I tried all the other methods. It's not only useful for GitHub Desktop but also when you fetch submodules and don't want to change the submodules' URLs. Great answer.
Perfect answer. Forcing an SSH connection is much faster than the HTTPS protocol. Worked for me too.
4

There's a known issue in Git for Windows version 2.47.0.windows.1 that can lead to this error, and it can be resolved by installing a different version:

https://github.com/git-for-windows/git/issues/5199

Comments

3

For Windows

set GIT_TRACE_PACKET=1
set GIT_TRACE=1
set GIT_CURL_VERBOSE=1
git init

and then clone the project you need. It worked for me.

1 Comment

These instructions are for when running cmd.exe on Windows; they won't work in PowerShell. Also, they only enable diagnostics, which should in theory not cause any functional changes.
3

None of the options above solved the issue for me. I noticed that some helped and got past the initial point of failure, but the transfer still stopped halfway.

My issue was that I was not able to fetch/clone/pull my repos because of some large files in the commit history that exceeded the 100 MB mark. My slow internet also made it impossible to fetch the affected repos (which still contained that history) so that I could fix the problem locally before pushing online again.

Based on some of the comments suggesting the issue is a network/internet problem, I decided to focus on my SSH config and tune it for performance to see if that would help. And voilà, problem solved. I still kept some of the configuration from previous suggestions, especially the .git/config cache settings:

[core]
    packedGitLimit = 512m
    packedGitWindowSize = 512m
[pack]
    deltaCacheSize = 2047m
    packSizeLimit = 2047m
    windowMemory = 2047m

This is how I updated my SSH config in ~/.ssh/config:

To improve Git download speed on Linux, you can modify the SSH client configuration to optimize the SSH connection. Here are a few configuration options you can consider:

Use a faster SSH algorithm: By default, OpenSSH prioritizes the ssh-rsa algorithm for SSH connections. However, the ecdsa-sha2-nistp256 algorithm is generally faster. You can add the following line to your ~/.ssh/config file to use it:

Host *
    HostKeyAlgorithms ecdsa-sha2-nistp256,ssh-rsa

Enable connection multiplexing: SSH connection multiplexing allows reusing existing SSH connections, which can significantly reduce the overhead of establishing new connections. Add the following lines to your ~/.ssh/config file to enable connection multiplexing:

Host *
    ControlMaster auto
    ControlPath ~/.ssh/control:%h:%p:%r
    ControlPersist yes

Increase SSH connection timeout: In case you have a slow or unstable network, increasing the SSH connection timeout can prevent premature connection terminations. Add the following line to your ~/.ssh/config file to increase the timeout to 60 seconds:

Host *
    ServerAliveInterval 60

Use a faster cipher: You can try using a faster cipher algorithm to improve SSH performance. However, keep in mind that it may require the remote SSH server to support the chosen cipher. For example, to use the chacha20-poly1305@openssh.com cipher, add the following line to your ~/.ssh/config file:

Host *
    Ciphers chacha20-poly1305@openssh.com

After making any changes to the SSH client configuration, save the file and try using Git again to see if there is an improvement in download speed.

After assessing them all, I combined them and put this at the end of my ~/.ssh/config file:

Host *
    HostKeyAlgorithms ecdsa-sha2-nistp256,ssh-rsa
    ControlMaster auto
    ControlPath ~/.ssh/control:%h:%p:%r
    ControlPersist yes
    ServerAliveInterval 60
    Ciphers chacha20-poly1305@openssh.com

Remember to exercise caution and be mindful of security implications when modifying SSH configuration options. Some options may require compatibility with the remote SSH server and its configuration.

Comments

3

I tried many of the suggestions above, but none worked. I finally upgraded my Git client from 2.37 to 2.41 and the problem went away.

Note: I've been getting random "your connection was interrupted" errors, so my PC seems to have issues, which is another story.

Comments

3

First, turn off compression:

git config --global core.compression 0 

Next, let's do a partial clone to truncate the amount of info coming down:

git clone --depth 1 <repo_URI> 

When that works, go into the new directory and retrieve the rest of the clone:

git fetch --unshallow 

or, alternately,

git fetch --depth=2147483647 

Now, do a regular pull:

git pull --all 

Comments

2

In my case I got this error with the first commit to a new repo.

I just deleted the .git folder and then added a few files at a time, committing with each addition.

I managed to add everything back, without running into the same error.

Comments

2

You may have many commits, and buried in them are files that are too big for GitHub to accept. If so, you may see this sideband-packet error rather than the usual recommendation to use Git LFS.

The solution for me was to remove the large files.
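
One way to locate such files before removing them is to list the largest blobs reachable from your history; a minimal sketch (the 100 MB threshold matches GitHub's per-file limit, and paths containing spaces will be truncated by the simple awk field split):

git rev-list --objects --all \
  | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
  | awk '$1 == "blob" && $3 > 100*1024*1024 { print $3, $4 }' \
  | sort -rn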

Comments

1

I believe you push your projects via HTTPS, not SSH. Try using

ssh://git@host:port/path/name.git 

Check whether SSL verification is turned on in your .gitconfig:

sslVerify = true 

If you do not have SSH keys, generate them and add them to your remote (a quick key-generation sketch follows the links below). Here is an example for Bitbucket:

  1. https://confluence.atlassian.com/bitbucketserver076/creating-ssh-keys-1026534841.html
  2. https://confluence.atlassian.com/bitbucketserver076/ssh-user-keys-for-personal-use-1026534846.html
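
For reference, generating a key typically looks like the following; the email address is just a placeholder, and the linked pages show how to add the resulting public key to your account:

ssh-keygen -t ed25519 -C "you@example.com"   # accept the default file location
cat ~/.ssh/id_ed25519.pub                    # paste this into your Bitbucket/GitHub SSH key settings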

Accept the host key during connection:

Are you sure you want to continue connecting (yes/no/[fingerprint])? yes 

Comments

1

I tried the suggestions above, without success.

It turns out my issue was path length. I don't know if it was the number of nested directories (which are plentiful) or overall path length (path + file).

I cloned at the root of my drive and it worked (yes, on Windows 10).

UPDATE: To clarify, I had to clone to the root of my drive, using the accepted answer.
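
As an aside (not part of the original answer, so treat it as an assumption about your setup): if deeply nested paths are the problem on Windows, enabling Git's long-path support can sometimes avoid having to clone at the drive root; Windows itself may also need long paths enabled:

git config --global core.longpaths true   # let Git for Windows handle paths longer than 260 characters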

Comments

1

Try pushing from VS Code's Git GUI. It worked for me.

Comments

1

For me, a quick restart of my router solved the problem. No other configuration changes were needed, but situations may vary.

1 Comment

This is what worked for me too. Though I had great speeds prior to the reboot, apparently the connection was not ideal. Once rebooted it all worked fine.
1

None of the above worked, but switching from OpenSSH to plink did:

$env:GIT_SSH="C:\ProgramData\chocolatey\bin\PLINK.EXE"

My setup is:

  • C:\windows\System32\OpenSSH\ssh.exe (8.1.0.1)
  • C:\Program Files\Git\cmd\git.exe (2.37.3.1)
  • Microsoft Windows [Version 10.0.19044.2604]

Comments

1

I faced a similar issue and solved it by setting the remote URL to the SSH one.

git remote set-url origin <SSH URL> 

My repo was initially cloned with SSH and later changed to HTTPS, which caused the problem for me. So I set the remote origin back to SSH and it started working as usual.

Comments

1

Adding to a previous solution, what solved the issue for me was to disable jumbo frames on my network, in addition to the mentioned config value.

I have no idea if the problem was my ISP or local (or maybe even Github), but it doesn't hurt to try for whoever reads this.
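
If you want to experiment with frame size from the command line rather than the router UI, on Linux you can temporarily drop the interface MTU back to the standard 1500 bytes; the interface name eth0 is a placeholder, so check yours first:

ip link show dev eth0                 # shows the current MTU
sudo ip link set dev eth0 mtu 1500    # disable jumbo frames on this interface until reboot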

Comments

0

None of these options worked for me on Windows when doing the git fetch --unshallow. To correct my instance, I went into the directory and ran git gui to bring up the UI version on Windows. Then I selected Compress Database.

This completed without error, and I was then able to do the git fetch --unshallow with and without compression.

1 Comment

This is a placebo: it doesn't solve the problem, it merely makes you wait long enough for the issue causing the problem to have finished.
0

I tried everything above and nothing worked.

So I created a new repository using GitHub Desktop (opened as admin) on Windows 11, and I copied all my files from the faulty repo into the new one via normal file-explorer operations. I pushed the new changes to GitHub (remote), still using the GitHub Desktop app, and then opened the local repo in VS Code. I am now using the new repo and everything is fine; I can push successfully after making edits.

Comments

0

I got this problem today out of nowhere — the same repo was working the day before with no problems. I solved it by deleting my SSH key on the Github config panel and adding it again, as described in Github docs. I did not need to create new SSH keys.

Comments
