[Errno 32] Broken pipe in s3cmd

I’ve been looking for a fast and easy way to deal with Amazon S3 from a Unix command line prompt, and came across s3cmd, which seemed great, until it didn’t work.

After running s3cmd --configure to configure it and then running a test successfully, I tried to upload a file using s3cmd put filename s3://bucket/filename and got an error, specifically [Errno 32] Broken pipe.

The program would continually throttle the upload speed and retry until it couldn’t get the file up at any speed, no matter how slow.

Much looking around on the ‘net didn’t turn up a helpful answer, but I did manage to figure out what was causing the problem.

If the bucket you’re uploading to doesn’t exist (or you mistyped it), it’ll fail with that error. Thanks a lot, generic error message.

To fix this, check your bucket name and retry with it spelled correctly. 🙂
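Before retrying, it’s worth confirming the bucket actually exists. A minimal sketch, assuming the standard s3cmd subcommands and a hypothetical bucket name (requires configured credentials, so this is an illustration rather than something to run blindly):

```shell
# List your buckets; if the target isn't in the output, the name is wrong
# or the bucket doesn't exist yet
s3cmd ls

# Create the missing bucket, then retry the upload
# (bucket and file names here are hypothetical)
s3cmd mb s3://my-bucket
s3cmd put filename s3://my-bucket/filename
```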

30 thoughts on “[Errno 32] Broken pipe in s3cmd”

  1. For me, it was not the case. The bucket was created in EU (Ireland), and all I needed to do was wait a couple of minutes before attempting the copy. Attempting it promptly after creation resulted in the Broken Pipe error.

    1. Same concept, though. The bucket didn’t exist in either case. For me, I spelled it wrong. For you, it just wasn’t provisioned yet. Either way, the error message is a little vague… 🙂

  2. If only it were that simple! 🙂 I experience this behaviour for buckets which DO exist. My connection just keeps getting cut. The lowest upload speed I’ve seen so far is 164B (yes, that’s BYTES!) per second. I’ve found where in the S3.py code to change the throttle speed; my problem is now getting it to throttle back by a reasonable amount on each retry. I’m not a Python coder and I’m getting nowhere with this – an upload that starts at 340K/s restarts at 181K/s, then is cut to 30K – there’s some algorithm being applied that I just can’t fathom. If anyone could help, it would be great. Also, can the wait time on connection failure be adjusted? The timeout period seems way too long. Or is this controlled by the Amazon end?
    Caro

    1. It appears that the script is what decreases the speed upon failure and sets the retry period before trying again – not Amazon. If the file is transferring okay, however, it lets it go and doesn’t retry at a faster speed. Are you transferring from a remotely hosted server or from your own personal computer?

  3. I’ve been having ‘([Errno 32] Broken pipe)’ problems uploading a large file (>26G) to S3. Apparently, the size of a single upload can be at most 5G, but s3cmd doesn’t convey the error message to the user. I used s3cp, and this program did give me the exact S3 error message, so that’s how I found out.

    I’ve split my files into chunks of 5G and now all is working fine.

    So the bottom line of your post can be: either your bucket doesn’t exist (yet), or the file you’re trying to upload is bigger than 5G.
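The chunk-and-reassemble workflow Anton describes can be sketched as below. This demo uses a tiny file and 16-byte chunks so it runs anywhere; for a real S3 upload you would use `split -b 5G` on the actual file (all filenames here are hypothetical):

```shell
# Demo of chunking and reassembly with a tiny file; for real S3 uploads,
# use "split -b 5G" on the actual file instead
printf 'hello world, this is a test payload' > demo.bin
split -b 16 demo.bin demo.part-        # 16-byte chunks, just for the demo
cat demo.part-* > demo.rejoined        # reassemble, as you would after download
cmp demo.bin demo.rejoined && echo OK  # verify the round trip
```

Each `demo.part-*` piece would then be uploaded with its own `s3cmd put`, staying under the 5G-per-put limit, and `cat` joins the downloaded parts back together in lexical order.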

    1. Good call and thanks for the tip, Anton! That helps me revise the bottom line for developers to be “give the user better error codes and messages!” 🙂

  4. My problem was that my system date was set one month back, and it resulted in this error… (I was testing something) 🙂

  5. Make sure you don’t have trailing spaces after the access or private keys in your ~/.s3cfg file. It fixed the issues I was having with Broken pipe and Signatures.
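The trailing-whitespace check above is easy to automate. A small sketch using a sample config file standing in for your real ~/.s3cfg (the key values are fake):

```shell
# A trailing space after a key silently breaks request signing; this flags it.
# sample.s3cfg stands in for your real ~/.s3cfg (key values are fake)
printf 'access_key = AKIAEXAMPLEKEY \nsecret_key = abc123secret\n' > sample.s3cfg

# Print any line ending in whitespace, with its line number
grep -nE '[[:space:]]+$' sample.s3cfg
```

Any output means that line needs its trailing whitespace trimmed; no output (and a non-zero exit status) means the file is clean.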

  6. I had the same problem with an IAM user account. It turned out the user was lacking the s3:PutObjectAcl permission. Generic error messages make a tool of very little use.
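For the IAM case, the policy needs object-write permissions including the ACL call s3cmd makes on upload. A sketch of a minimal policy statement, with a hypothetical bucket name – adjust to your own setup rather than treating this as an authoritative policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```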

    1. Agreed, Alex. I, too, would like to see slightly better error messages! And why, if for any of the reasons that this error actually comes up, would it try again and again at slower speeds to upload? I suppose it’s at least great that a tool exists to upload to S3. 🙂

    2. Thanks Alex, you saved my day!
      I was struggling with a different configuration (Django, django-storages and boto – s3boto), but the problem was this…
      I’ll try to post it as a boto issue.

  7. Hi,

    I have the same issue. When I try to upload a 5GB file, it shows a broken pipe error.
    Please let me know how I can fix this in s3cmd.

    Is it possible or not?

    Ashok

  8. I had the same problem, and mine was not about sizes or wrong naming. If you are using an IAM user’s credentials and its policy does not have access to call the put command, it happens as well. Just as a note: if there is no access to the bucket at all, s3cmd will give an auth error, but if you have some rights on the bucket then it acts like this.

  9. I had the same error, and it was because of the file size >5GB, but YOU CAN upload files larger than 5GB to Amazon S3; the max object size is 5TB (http://aws.amazon.com/s3/faqs/#How_much_data_can_I_store) but limited to 5GB per ‘put’ (sounds crazy, but read on). The solution in Amazon’s FAQ is to use the Multipart API.

    – It splits the file into parts, uploads them, then re-assembles them at the other end.

    So, which tools can you use? I found s3cmd to be the best, and
    s3cmd 1.1.0-beta2 supports Multipart (only for ‘put’, not for ‘sync’) by default (http://s3tools.org/s3cmd-110b2-released).

    I am FINALLY uploading my 13GB data file to s3 without having to mess with it.

    To install the s3cmd 1.1.0-beta2 (instead of older, non multipart ones from distro or even s3cmd repos):

    1. Download S3cmd (http://s3tools.org/download) and extract it
    2. Run “sudo python setup.py install” from the command line.
    3. Finally, run “s3cmd --configure” from the command line to configure s3cmd

    .. go and ‘put’ your big files.
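The multipart flow described above can be sketched as below. This assumes s3cmd 1.1.0-beta2 or newer; the --multipart-chunk-size-mb flag is from the 1.1.0-era docs, so verify it with s3cmd --help on your version. It needs configured credentials and a real bucket, so it is an illustration only (bucket and file names are hypothetical):

```shell
# With s3cmd 1.1.0+, a put on a large file is split into multipart
# chunks automatically
s3cmd put bigfile.tar s3://my-bucket/bigfile.tar

# Optionally tune the part size (verify the flag exists on your version)
s3cmd put --multipart-chunk-size-mb=100 bigfile.tar s3://my-bucket/bigfile.tar
```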

  10. Little extra note to the above extremely helpful comment… slightly newer version…

    http://sourceforge.net/projects/s3tools/files/s3cmd/1.1.0-beta3/s3cmd-1.1.0-beta3.zip/download

    …and contains the instruction (found in the INSTALL file) that “python setup.py install” is a good way to… install it. For me, it cleanly overwrote the prior version.

    I then used it to upload an 8.8GB file to S3, then pulled it back down, and its md5 sum matched the original (yet another good thing!).

  11. What I found was that small files were uploading after a redirect to my bucket (hosted in Singapore) and large files were giving this (not so helpful) error message.

    Fixed it by updating the cfg file with:
    host_base = s3-ap-southeast-1.amazonaws.com
    host_bucket = %(bucket)s.s3-ap-southeast-1.amazonaws.com
