Git: fatal: early EOF / fatal: index-pack failed / fatal: The remote end hung up unexpectedly
https://stackoverflow.com/questions/15240815/git-fatal-the-remote-end-hung-up-unexpectedly https://stackoverflow.com/questions/21277806/fatal-early-eof-fatal-index-pack-failed
fatal: early EOF
fatal: index-pack failed
I have googled and found many solutions, but none work for me.
I am trying to clone from one machine by connecting to the remote server, which is on the LAN.
Running the clone command from another machine causes the error.
But running the SAME clone command using git://192.168.8.5 ... on the server itself is okay and successful.
Any ideas?
user@USER ~
$ git clone -v git://192.168.8.5/butterfly025.git
Cloning into 'butterfly025'...
remote: Counting objects: 4846, done.
remote: Compressing objects: 100% (3256/3256), done.
fatal: read error: Invalid argument, 255.05 MiB | 1.35 MiB/s
fatal: early EOF
fatal: index-pack failed
I have added this config to my .gitconfig, but it didn't help either. I am using git version 1.8.5.2.msysgit.0:
[core]
compression = -1
- I faced this issue for 2-3 days when I was trying to clone over VPN. In my case the issue was network bandwidth; I fixed it by cloning on a high-speed network. – Avijit Nagare, Feb 1, 2017 at 8:54
- I've also noticed it's network-related. – wonder, Jun 15, 2017 at 8:20
- I got this error because my friends don't know git very well and pushed a lot of images into the repository! =)) – Clite Tailor, Nov 19, 2017 at 13:01
- I've also noticed it's network-related. I also fixed it by cloning on a high-speed network. – shashaDenovo, Feb 26, 2020 at 9:16
- I also got the same error. I am using a fiber optic connection (40 Mbps download speed), and there are no large files (like images/videos) in my repository either. Nevertheless, I'm still getting the same error. – Pawara Siriwardhane, Mar 7, 2021 at 3:31
First, turn off compression:
git config --global core.compression 0
Next, let's do a partial clone to truncate the amount of info coming down:
git clone --depth 1 <repo_URI>
When that works, go into the new directory and retrieve the rest of the clone:
git fetch --unshallow
or, alternately,
git fetch --depth=2147483647
Now, do a regular pull:
git pull --all
I think there is a glitch with msysgit in the 1.8.x versions that exacerbates these symptoms, so another option is to try with an earlier version of git (<= 1.8.3, I think).
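Putting the steps above together, the whole workaround looks roughly like this (a sketch; <repo_URI> is the placeholder used above and <repo_dir> stands for the directory the clone creates):
git config --global core.compression 0   # turn off compression
git clone --depth 1 <repo_URI>           # shallow clone first
cd <repo_dir>                            # enter the new directory
git fetch --unshallow                    # retrieve the rest of the history
git pull --all                           # finish with a regular pull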
- Thank you, this worked great. I had tried changing http.postBuffer, which didn't work, but after doing as stated in this answer, it worked great. I didn't use the git fetch --depth=2147483647 line, but I used the rest. – Jun 24, 2014 at 13:55
- @Jose A. – I experienced this problem when I was on a newer version of msysgit. If you are on msysgit, try an older version (<= 1.8.3). Otherwise, try git fetch --depth 1000 (then 2000, etc., increasing incrementally until all the files are pulled). – ingyhere, Mar 19, 2015 at 15:25
- @Jose A. – Also, have a look at this: stackoverflow.com/questions/4826639/… – ingyhere, Mar 19, 2015 at 15:29
- Hi, dear friend. Thank you for your great solution. But the last git pull --all doesn't work, because git clone --depth 1 sets the fetching range to only one branch. So we have to edit .git/config first. – pjincz, Jul 9, 2016 at 16:11
- Be aware that this is not a real solution, as it will set fetching to only one branch and you might end up in this situation: stackoverflow.com/questions/20338500/… – wranvaud, Oct 19, 2016 at 18:59
This error may occur when git needs more memory. You can add these lines to your global git configuration file, which is .gitconfig in $USER_HOME, in order to fix the problem:
[core]
packedGitLimit = 512m
packedGitWindowSize = 512m
[pack]
deltaCacheSize = 2047m
packSizeLimit = 2047m
windowMemory = 2047m
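If you prefer not to edit the file by hand, the same settings can be applied from the command line (a sketch mirroring the block above):
git config --global core.packedGitLimit 512m
git config --global core.packedGitWindowSize 512m
git config --global pack.deltaCacheSize 2047m
git config --global pack.packSizeLimit 2047m
git config --global pack.windowMemory 2047m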
- This worked for me, although I still needed several attempts. But without this change the abort came at 30%, afterwards at 75%... and once it went up to 100% and worked. :) – peschü, Mar 15, 2017 at 6:33
- Still not working for me: remote: Enumerating objects: 43, done. remote: Counting objects: 100% (43/43), done. remote: Compressing objects: 100% (24/24), done. error: inflate returned -55/26) fatal: unpack-objects failed – Nov 7, 2019 at 9:21
- This problem happened frequently for me on Windows 10 with Git 2.25.0. I found that if I did git pull from the remote machine repeatedly, it would occasionally succeed. But what a nuisance. Then I discovered that if you run git daemon from within the built-in Windows Bash prompt, it works 100% with no workaround needed. – Stefan, Feb 15, 2020 at 15:30
- Finally solved by git config --global core.compression 9.
From a BitBucket issue thread:
I tried almost five times, and it still happened.
Then I tried to use better compression and it worked!
git config --global core.compression 9
core.compression
An integer -1..9, indicating a default compression level. -1 is the zlib default.
0 means no compression, and 1..9 are various speed/size tradeoffs, 9 being slowest.
If set, this provides a default to other compression variables, such as core.looseCompression and pack.compression.
- Needed to run git repack in combination with this solution, and then it worked. – erikH, Oct 19, 2018 at 13:59
- This works for me too, through VPN and corporate proxy. --compression 0 did not work, nor did all the .gitconfig changes suggested above. – Dec 9, 2019 at 22:33
- Probably changing the config params here (to reduce the size of transferred data) would do the job, alternately. – ingyhere, Aug 6, 2020 at 17:52
As @ingyhere said:
Shallow Clone
First, turn off compression:
git config --global core.compression 0
Next, let's do a partial clone to truncate the amount of info coming down:
git clone --depth 1 <repo_URI>
When that works, go into the new directory and retrieve the rest of the clone:
git fetch --unshallow
or, alternately,
git fetch --depth=2147483647
Now, do a pull:
git pull --all
Then, to solve the problem of your local branch only tracking master, open your git config file (.git/config) in the editor of your choice, where it says:
[remote "origin"]
url=<git repo url>
fetch = +refs/heads/master:refs/remotes/origin/master
change the line
fetch = +refs/heads/master:refs/remotes/origin/master
to
fetch = +refs/heads/*:refs/remotes/origin/*
Do a git fetch, and git will now pull all your remote branches.
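Alternatively, the same refspec change can be made without opening an editor (a sketch; it assumes the remote is named origin):
git config remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"   # track all branches, not just master
git fetch --all                                                        # now pulls every remote branch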
- You could also do this: git branch -r | awk -F'origin/' '!/HEAD|master/{print $2 " " $1"origin/"$2}' | xargs -L 1 git branch -f --track followed by git fetch --all --prune --tags and git pull --all. It will set all remote tracking branches locally. – ingyhere, Aug 6, 2020 at 17:48
- Changing from fetch = +refs/heads/*:refs/remotes/origin/* to fetch = +refs/heads/devel:refs/remotes/origin/devel did it for me. Yes, I did the reverse: at our company we use "devel" for our main branch name. – Apr 28, 2022 at 13:28
- Thanks a lot, it works! So this method creates another folder in which the repo is cloned and repaired. After that, is there a way to push the work that I committed but wasn't able to push from the initial broken folder? – Sep 26, 2022 at 9:04
I was getting the same error; on my side I resolved it by running this command. On Windows it has some memory issue.
git config --global pack.windowMemory 256m
- This solution is the one that worked for me today. I'm on Windows 10 64-bit using git version 2.31.1.windows.1. Thanks! – Pflugs, May 25, 2022 at 14:27
In my case this was quite helpful:
git clone --depth 1 --branch $BRANCH $URL
This will limit the checkout to the mentioned branch only, and hence will speed up the process.
Hope this helps.
I tried all of those commands and none worked for me; what worked was changing the git URL to http instead of ssh.
If it is a clone command, do:
git clone <your_http_or_https_repo_url>
Otherwise, if you are pulling on an existing repo, do it with:
git remote set-url origin <your_http_or_https_repo_url>
Hope this helps someone!
- This question is really about the error message in the output above when there's a problem syncing giant chunks of files from a connected repo. You're saying that cutting over to https from ssh allowed the clone to finish? – ingyhere, Dec 11, 2014 at 1:48
- Yes! That worked for me. I have a 4 GB+ repo, and the only solution I found that worked was this! – elin3t, Dec 11, 2014 at 3:17
- It works for me, thank you! Clone by https and then set the remote back to ssh. – Tuan, Nov 14, 2017 at 15:47
- I'd really like to know why this worked. Is there something in the SSH protocol that chokes on large objects that HTTPS does not? Is this a transport layer issue? – Dec 18, 2018 at 13:55
- A long time ago I did the above (switched to HTTPS); today I noticed that there was a man-in-the-middle attack, and if I use a VPN, SSH works just fine (with no need for HTTPS). – Jul 27, 2022 at 18:49
I faced this problem on macOS Big Sur with an M1 chip, and none of the solutions worked for me.
Edit: this works as a solution for the M2 chip as well.
I solved it by increasing the ulimits, as below.
ulimit -f 2097152
ulimit -c 2097152
ulimit -n 2097152
The commands above are only valid for the current terminal session, so run them first and then clone the repository.
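One way to keep the raised limits scoped to just the clone is to run everything in a subshell (a sketch, reusing the <repo_URI> placeholder from the answers above):
( ulimit -f 2097152; ulimit -c 2097152; ulimit -n 2097152; git clone <repo_URI> )   # limits apply only inside the subshell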
I got this error when git ran out of memory.
Freeing up some memory (in this case: letting a compile job finish) and trying again worked for me.
- For me, there wasn't much memory available; freeing some up and retrying solved it. – Jan 15, 2015 at 18:23
In my case it was a connection problem. I was connected to an internal Wi-Fi network in which I had limited access to resources. Git could start the fetch, but at a certain point it crashed. This means it can be a network-connection problem. Check that everything is running properly: antivirus, firewall, etc.
The answer of elin3t is therefore important, because changing the protocol can improve download performance so that network problems can be avoided.
Setting the config below didn't work for me:
[core]
packedGitLimit = 512m
packedGitWindowSize = 512m
[pack]
deltaCacheSize = 2047m
packSizeLimit = 2047m
windowMemory = 2047m
As in the previous comment, it might be a memory issue in git. Thus, I tried to reduce the number of working threads (from 32 to 8), so that it won't fetch as much data from the server at the same time. I also added -f to force syncing other projects.
-f: Proceed with syncing other projects even if a project fails to sync.
It works fine now:
repo sync -f -j8
Note that Git 2.13.x/2.14 (Q3 2017) does raise the default core.packedGitLimit, which influences git fetch:
The default packed-git limit value has been raised on larger platforms (from 8 GiB to 32 GiB) to save "git fetch" from a (recoverable) failure while "gc" is running in parallel.
See commit be4ca29 (20 Apr 2017) by David Turner (csusbdt).
Helped-by: Jeff King (peff).
(Merged by Junio C Hamano -- gitster -- in commit d97141b, 16 May 2017)
Increase core.packedGitLimit
When core.packedGitLimit is exceeded, git will close packs.
If there is a repack operation going on in parallel with a fetch, the fetch might open a pack and then be forced to close it due to packedGitLimit being hit.
The repack could then delete the pack out from under the fetch, causing the fetch to fail.
Increase core.packedGitLimit's default value to prevent this.
On current 64-bit x86_64 machines, 48 bits of address space are available.
It appears that 64-bit ARM machines have no standard amount of address space (that is, it varies by manufacturer), and IA64 and POWER machines have the full 64 bits.
So 48 bits is the only limit that we can reasonably care about. We reserve a few bits of the 48-bit address space for the kernel's use (this is not strictly necessary, but it's better to be safe), and use up to the remaining 45.
No git repository will be anywhere near this large any time soon, so this should prevent the failure.
A previous answer recommends setting core.packedGitLimit to 512m. I'd say there are reasons to think that's counterproductive on a 64-bit architecture. The documentation for core.packedGitLimit says:
Default is 256 MiB on 32 bit platforms and 32 TiB (effectively unlimited) on 64 bit platforms. This should be reasonable for all users/operating systems, except on the largest projects. You probably do not need to adjust this value.
If you want to try it out, check whether you have it set, and then remove the setting:
git config --show-origin core.packedGitLimit
git config --unset --global core.packedGitLimit
I had the same problem; I even tried to download the project directly from the website as a zip file, but the download got interrupted at the exact same percentage.
This single line fixed my problem like a charm:
git config --global core.compression 0
I know other answers have mentioned this, but no one here mentioned that this line alone can fix the problem.
Hope it helps.
- Same here; this fixed it, whereas the more complex solutions offered left me with an unusable (though probably fixable) clone. – Ron HD, Jan 21, 2021 at 17:17
It's confusing because Git logs may suggest various connection or ssh authorization errors, e.g.: ssh_dispatch_run_fatal: Connection to x.x.x.x port yy: message authentication code incorrect, the remote end hung up unexpectedly, early EOF.
Server-side solution
Let's optimize the git repository on the server side:
- Enter the server's bare git repository.
- Call git gc.
- Call git repack -A.
E.g.:
ssh admin@my_server_url.com
sudo su git
cd /home/git/my_repo_name # where my server's bare repository exists.
git gc
git repack -A
Now I am able to clone this repository without errors, e.g. on the client side:
git clone git@my_server_url.com:my_repo_name
The command git gc may also be called on the git client side to avoid a similar git push problem.
If you are an administrator of a GitLab service, trigger Housekeeping manually. Internally it calls git gc or git repack.
Client-side solution
Another (hacky, client-side only) solution is downloading the latest master without history:
git clone --single-branch --depth=1 git@my_server_url.com:my_repo_name
There is a chance that the buffer overflow will not occur.
In my case nothing worked when the protocol was https; then I switched to ssh and made sure I pulled the repo from the last commit and not the entire history, and also only a specific branch. This helped me:
git clone --depth 1 "ssh:.git" --branch "specific_branch"
I had the same problem. Following the first step above, I was able to clone, but I cannot do anything else: I can't fetch, pull, or check out old branches.
Each command runs much slower than usual, then dies after compressing the objects.
I:\dev [master +0 ~6 -0]> git fetch --unshallow
remote: Counting objects: 645483, done.
remote: Compressing objects: 100% (136865/136865), done.
error: RPC failed; result=18, HTTP code = 20082 MiB | 6.26 MiB/s
fatal: early EOF
fatal: The remote end hung up unexpectedly
fatal: index-pack failed
This also happens when your refs are using too much memory. Pruning the memory fixed it for me. Just add a limit to what you're fetching, like so:
git fetch --depth=100
This will fetch the files, but with only the last 100 edits in their histories. After this, you can run any command just fine and at normal speed.
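If you need the full history later, the shallow clone can be deepened step by step before going fully unshallow (a sketch; the depth values are arbitrary):
git fetch --depth=1000      # deepen gradually...
git fetch --depth=10000     # ...in increasing steps
git fetch --unshallow       # finally fetch the complete history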
In my case the problem was not any of the git configuration parameters, but the fact that my repository had one file exceeding the maximum file size allowed on my system. I was able to check this by trying to download a large file and getting a "File Size Limit Exceeded" error on Debian.
After that I edited my /etc/security/limits.conf file, adding the following lines at the end of it:
* hard fsize 1000000
* soft fsize 1000000
To actually "apply" the new limit values, you need to re-login.
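To check whether such a limit is what you are hitting, you can compare the current limit against the largest pack file in the repository (a sketch; the pack path assumes a standard .git layout):
ulimit -f                                              # current file-size limit (bash reports 1024-byte blocks)
du -k .git/objects/pack/*.pack | sort -n | tail -n 1   # largest pack file, in KiB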
I tried several times after I set the git buffer, as I mentioned in the question, and it seems to work now.
So if you hit this error, run this command:
git config --global http.postBuffer 2M
and then try again a few times.
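Since the failure is intermittent, a small retry loop saves the manual re-running (a sketch; <repo_URI> and <repo_dir> are placeholders, and five attempts is an arbitrary choice):
for i in 1 2 3 4 5; do
    git clone <repo_URI> && break   # stop as soon as one attempt succeeds
    rm -rf <repo_dir>               # clean up the partial clone before retrying
    sleep 5
done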
- I don't know why, but using this command makes it possible to clone large projects. – Feb 18, 2022 at 9:32
- This solved it for me. I had to go to the extreme of setting it to git config --global http.postBuffer 100M to get a 300 MB project, but it worked. – ZJR, Jan 7 at 4:45
Network quality matters; try to switch to a different network. What helped me was changing my Internet connection from Virgin Media high-speed land-based broadband to a hotspot on my phone.
Before that I tried the accepted answer to limit the clone size, tried switching between 64-bit and 32-bit versions, and tried disabling the git file cache; none of them helped.
Then I switched to the connection via my mobile, and the first step (git clone --depth 1 <repo_URI>) succeeded. I switched back to my broadband, but the next step (git fetch --unshallow) also failed. So I deleted the code cloned so far, switched to the mobile network, tried again the default way (git clone <repo_URI>), and it succeeded without any issues.
- This is madness, but I can confirm the same. The issue is present with Virgin Media (500 Mbps), not present on BT (40 Mbps). – Aug 31, 2021 at 9:53
For me it worked when I changed the compression:
git config --global core.compression 9
This works.
I tried almost all the answers here but no luck. I finally got it to work by using the GitHub Desktop app, https://desktop.github.com/
MacBook with M1 chip / Monterey; not sure if it mattered.
I tried most of the answers here; I got the error with the PuTTY SSH client in all possible configurations.
Once I switched to OpenSSH, the error was gone (remove the environment variable GIT_SSH and restart the git bash).
I was using a new machine and the newest git version. On many other/older machines (AWS as well) it did work as expected with PuTTY, without any git configuration.
None of the solutions above worked for me.
The solution that finally worked for me was switching the SSH client. The GIT_SSH environment variable was set to the OpenSSH provided by Windows Server 2019, version 7.7.2.1:
C:\Windows\System32\OpenSSH\ssh.exe
I simply installed PuTTY 0.72:
choco install putty
And changed GIT_SSH to:
C:\ProgramData\chocolatey\lib\putty.portable\tools\PLINK.EXE
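For reference, the switch can be done from a Windows command prompt (a sketch; setx persists the variable for future sessions, and the path mirrors the one above):
setx GIT_SSH "C:\ProgramData\chocolatey\lib\putty.portable\tools\PLINK.EXE"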
Using @cmpickle's answer, I built a script to simplify the clone process.
It is hosted here: https://gist.github.com/gianlucaparadise/10286e0b1c5409bd1049d67640fb7c03
You can run it using the following line:
curl -sL https://git.io/JvtZ5 | sh -s repo_uri repo_folder
Tangentially related, and only useful in case you have no root access and manually extract Git from an RPM (with rpm2cpio) or another package (.deb, ...) into a subfolder. Typical use case: you try to use a newer version of Git over the outdated one on a corporate server.
If git clone fails with fatal: index-pack failed without the early EOF mention, but instead with a help message about usage: git index-pack, there is a version mismatch and you need to run git with the --exec-path parameter:
git --exec-path=path/to/subfoldered/git/usr/bin/git clone <repo>
In order to have this happen automatically, specify in your ~/.bashrc:
export GIT_EXEC_PATH=path/to/subfoldered/git/usr/libexec
From a git clone, I was getting:
error: inflate: data stream error (unknown compression method)
fatal: serious inflate inconsistency
fatal: index-pack failed
After rebooting my machine, I was able to clone the repo fine.
- At first I couldn't believe that just rebooting the machine could fix this problem, but I had tried everything else I found and nothing worked, so rebooting my machine was my last resort. Luckily for me, when the machine started I tried to clone again, and I couldn't believe it: it works!!! – Thxopen, Jun 14, 2020 at 13:10
I turned off all the downloads I was doing in the meantime, which probably freed some space and cleared up some upload/download bandwidth.
The git-daemon issue seems to have been resolved in v2.17.0 (verified against a non-working v2.16.2.1), i.e. the workaround of selecting text in the console to "lock the output buffer" should no longer be required.
From https://github.com/git/git/blob/v2.17.0/Documentation/RelNotes/2.17.0.txt:
- Assorted fixes to "git daemon". (merge ed15e58efe jk/daemon-fixes later to maint).
I've experienced the same problem: the repo was too big to be downloaded via SSH. Just like @elin3t recommended, I cloned over HTTP/HTTPS and changed the remote URL in .git/config to use the SSH repo, as sketched below.
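A minimal sketch of that flow (the URLs are placeholders in the style of the server-side answer above):
git clone https://my_server_url.com/my_repo_name.git                 # clone over HTTPS first
cd my_repo_name
git remote set-url origin git@my_server_url.com:my_repo_name.git     # then point origin back to SSH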