[Errno 32] Broken pipe in s3cmd

I’ve been looking for a fast and easy way to deal with Amazon S3 from a Unix command-line prompt, and came across s3cmd, which seemed great, until it didn’t work.

After running s3cmd --configure to configure it and then running a test successfully, I tried to upload a file using  s3cmd put filename s3://bucket/filename and got an error, specifically [Errno 32] Broken pipe.

The program would continually throttle the upload speed and retry until it couldn’t get the file up at any speed, no matter how slow.

Much looking around on the ‘net didn’t turn up a helpful answer, but I did manage to figure out what was causing the problem.

If the bucket you’re uploading to doesn’t exist (or you mistyped it :| ), it’ll fail with that error. Thank you, generic error message.

To fix this, check your bucket name and retry with it spelled correctly. :)
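
A quick way to double-check is to list what s3cmd can actually see before retrying the put. Something like this (the bucket name below is just a placeholder):

    # List all of your buckets to confirm the target actually exists
    s3cmd ls

    # Or check the specific bucket directly
    s3cmd ls s3://my-bucket

    # If it doesn't exist yet, create it, then retry the upload
    s3cmd mb s3://my-bucket
    s3cmd put filename s3://my-bucket/filename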


25 Responses to [Errno 32] Broken pipe in s3cmd

  1. Shaibn says:

    For me, that was not the case. The bucket was created in EU (Ireland), and all I needed to do was wait a couple of minutes before attempting the copy. Attempting it promptly after creation resulted in the Broken Pipe error.

    • Jeremy Shapiro says:

      Same concept, though. The bucket didn’t exist in either case. For me, I spelled it wrong. For you, it just wasn’t provisioned yet. Either way, the error message is a little vague… :)

  2. Caro Davy says:

    If only it were that simple! :-) I experience this behaviour for buckets which DO exist. My connection just keeps getting cut. The lowest upload speed I’ve seen so far is 164B (yes, that’s BYTES!) per second. I’ve found where in the S3.py code to change the throttle speed; my problem is now getting it to throttle back by a reasonable amount on each retry. I’m not a Python coder and I’m getting nowhere with this – an upload that starts at 340K/s restarts at 181K/s, then is cut to 30K – there’s some algorithm being applied that I just can’t fathom. If anyone could help it would be great. Also, can the wait time on connection failure be adjusted? The timeout period seems way too long. Or is this controlled by the Amazon end?
    Caro

    • Jeremy Shapiro says:

      It appears that it’s the script that throttles the speed back down after a failure and sets the retry period before trying again – not Amazon. If the file is transferring okay, however, it lets it go and doesn’t retry at a faster speed. Are you transferring from a remotely hosted server or from your own personal computer?

  3. Anton Zeef says:

    I’ve been having ‘([Errno 32] Broken pipe)’ problems uploading a large file (>26G) to S3. Apparently, the size of a file can be at most 5G, but s3cmd doesn’t convey the error message to the user. I used s3cp, and that program did give me the exact S3 error message, so that’s how I found out.

    I’ve split my files into chunks of 5G and now all is working fine.

    So the bottom line of your post can be: either your bucket doesn’t exist (yet), or the file you’re trying to upload is bigger than 5G.
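
    Roughly, the split-and-upload looks like this (file and bucket names are just examples, and I use 4GB chunks to stay safely under the per-put limit):

      # Split the big file into chunks below the 5GB-per-put limit
      # (produces bigfile.part_aa, bigfile.part_ab, ...)
      split -b 4096m bigfile bigfile.part_

      # Upload each chunk separately
      for part in bigfile.part_*; do
          s3cmd put "$part" s3://my-bucket/"$part"
      done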

    • Jeremy Shapiro says:

      Good call, and thanks for the tip, Anton! That helps me revise the bottom line for developers to be “give the user better error codes and messages!” :)

  4. andrija says:

    My problem was that I had set the date back one month (I was testing something), and it resulted in this error… :)

  5. Yuri de Wit says:

    Make sure you don’t have trailing spaces after the access or secret keys in your ~/.s3cfg file. It fixed the Broken pipe and signature issues I was having.
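
    A quick way to spot them (assuming the default ~/.s3cfg location):

      # Show any lines in ~/.s3cfg that end in whitespace
      grep -n '[[:space:]]$' ~/.s3cfg

      # Or print the key lines with end-of-line markers so trailing spaces stand out (GNU cat)
      grep -E '^(access_key|secret_key)' ~/.s3cfg | cat -A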

  6. Alex says:

    I had the same problem with an IAM user account. It turned out the user was lacking the s3:PutObjectAcl permission. Generic error messages make a tool of very little use.
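
    For reference, a minimal policy sketch that grants the upload-related actions, including s3:PutObjectAcl (the bucket name is just an example; adjust to your setup):

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:PutObject", "s3:PutObjectAcl"],
            "Resource": [
              "arn:aws:s3:::my-bucket",
              "arn:aws:s3:::my-bucket/*"
            ]
          }
        ]
      }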

    • Jeremy Shapiro says:

      Agreed, Alex. I, too, would like to see slightly better error messages! And why, for any of the reasons this error actually comes up, would it try again and again at slower and slower speeds? I suppose it’s at least great that a tool exists to upload to S3. :)

    • caesarsol says:

      Thanks Alex, you saved my day!
      I was struggling with a different configuration (Django, django-storages and boto – s3boto), but the problem was this…
      I’ll try to post it as a boto issue.

  7. Ashok Kumar says:

    Hi,

    I am having the same issue. When I try to upload a 5GB file, it shows a broken pipe error.
    Please let me know how I can fix this in s3cmd.

    Is it possible or not?

    Ashok

  8. J says:

    I am having the same issue when trying to upload a 52MB file:

    99 retries left, sleeping for 30 seconds
    Broken pipe: Broken pipe

  9. Mario Medina says:

    Hi! I tested this, and the problem was that the file was more than 5GB. I split the file into 5GB blocks and everything works OK now.

  10. I ran into this problem. Turns out I hit the 5GB limit as well.
    Thanks.

  11. Serhat Artun says:

    I had the same problem, and mine was not about sizes or wrong naming. If you are using an IAM user’s credentials and its policy does not have access to call the put command, this happens as well. Just as a note: if there is no access to the bucket at all, s3cmd will give an auth error, but if you have some rights on the bucket, then it acts like this.

  12. Brandon Yap says:

    I had this problem too, and it turned out to be time synching. Make sure the clock on your client system is correct.
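
    A quick sanity check and fix, assuming an NTP client is installed:

      # Print the current UTC time and compare it against a trusted clock;
      # S3 request signing fails when the client clock drifts too far
      date -u

      # One-shot sync against a public NTP pool
      sudo ntpdate pool.ntp.org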

  13. George Cooke says:

    I had the same error; it was because of the file size being >5GB. But YOU CAN upload files larger than 5GB to Amazon S3; the max object size is 5TB (http://aws.amazon.com/s3/faqs/#How_much_data_can_I_store), but you’re limited to 5GB per ‘put’ (sounds crazy, but read on). The solution in Amazon’s FAQ is to use the Multipart API.

    – It splits the file into parts, uploads them, then re-assembles them at the other end.

    So, which tools can you use? I found s3cmd to be the best, and
    s3cmd 1.1.0-beta2 supports multipart (only for ‘put’, not for ‘sync’) by default (http://s3tools.org/s3cmd-110b2-released).

    I am FINALLY uploading my 13GB data file to s3 without having to mess with it.

    To install s3cmd 1.1.0-beta2 (instead of the older, non-multipart versions from your distro or even the s3cmd repos):

    1. Download S3cmd (http://s3tools.org/download) and extract it
    2. Run “sudo python setup.py install” from the command line.
    3. Finally, run “s3cmd --configure” from the command line to configure S3cmd

    … then go and ‘put’ your big files.
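
    Condensed into commands, the above looks roughly like this (the archive and file names are assumptions based on the version mentioned; adjust to whatever you actually download):

      # 1. Download the 1.1.0-beta2 tarball from http://s3tools.org/download and extract it
      tar xzf s3cmd-1.1.0-beta2.tar.gz
      cd s3cmd-1.1.0-beta2

      # 2. Install it system-wide
      sudo python setup.py install

      # 3. Re-configure, then 'put' the big file; multipart is used automatically for large uploads
      s3cmd --configure
      s3cmd put bigfile.tar s3://my-bucket/bigfile.tar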

  14. Richard Goldman says:

    A little extra note to the above extremely helpful comment… a slightly newer version:

    http://sourceforge.net/projects/s3tools/files/s3cmd/1.1.0-beta3/s3cmd-1.1.0-beta3.zip/download

    …and contains the instruction (found in the INSTALL file) that “python setup.py install” is a good way to… install it. For me, it cleanly overwrote the prior version.

    I then used it to upload an 8.8GB file to S3, then pulled it back down, and its md5 sum matched the original (yet another good thing!).

  15. SPM says:

    I tried George Cooke’s trick. It works fine.

    Thank you so much George :)

  16. samwize says:

    If you set up the security group policy to restrict access to certain resources, you might have set it up the wrong way, resulting in the same broken pipe error.

    I blogged about how to set the policy correctly here: http://samwize.com/2013/04/21/s3cmd-broken-pipe-error-errno-32/

  17. Denis says:

    I had this issue with a very small file, about 150KB, and fixed it by changing ‘use_https’ to ‘True’.
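
    If you’d rather not re-run the configure wizard, a one-liner sketch (assuming the default ~/.s3cfg with the stock ‘use_https = False’ line):

      # Flip s3cmd over to HTTPS in the existing config file
      sed -i 's/^use_https = False/use_https = True/' ~/.s3cfg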
