Delete an Amazon S3 bucket?

I've been interacting with Amazon S3 through S3Fox and I can't seem to delete my bucket. I select a bucket, hit delete, confirm the delete in the popup, and... nothing happens. Is there another tool that I should use?

You can now finally use the Lifecycle (expiration) rules feature to delete all the files in one go. You can even do it from the AWS console.

Simply right click on the bucket name in the AWS console, select "Properties", then in the row of tabs at the bottom of the page select "Lifecycle" and "Add rule". Create a lifecycle rule with the "Prefix" field left blank (blank means all files in the bucket, or you could set it to "a" to delete only files whose names begin with "a"). Set the "Days" field to "1". That's it. Done. Assuming the files are more than one day old, they should all get deleted, and then you can delete the bucket.

I only just tried this for the first time, so I'm still waiting to see how quickly the files get deleted (it wasn't instant, but presumably should happen within 24 hours) and whether I get billed for one delete command or 50 million delete commands... fingers crossed!
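
For reference, the same expiration rule can also be set through the API rather than the console. Here is a minimal sketch using the newer aws-sdk-s3 Ruby gem (not any of the tools discussed in this thread); the bucket name, region, and rule id are placeholders:

require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-east-1") # assumes credentials in the environment

# One rule: expire every object (an empty prefix matches all keys) one day
# after creation, mirroring the console steps described above.
s3.put_bucket_lifecycle_configuration(
  bucket: "your-bucket-name",
  lifecycle_configuration: {
    rules: [{
      id: "expire-everything",   # hypothetical rule name
      status: "Enabled",
      filter: { prefix: "" },
      expiration: { days: 1 },
    }],
  })

Once the rule has run and the bucket is empty, the bucket itself still has to be deleted separately.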

Remember that S3 buckets need to be empty before they can be deleted. The good news is that most 3rd party tools automate this process. If you are running into problems with S3Fox, I suggest trying S3FM for a GUI or S3Sync for a command line. Amazon has a good article describing how to use S3Sync. After setting up your variables, the key command is

 ./s3cmd.rb deleteall <your bucket name> 

Deleting buckets that hold a large number of individual files tends to crash a lot of S3 tools, because they try to display a list of all the files in the directory. You need to find a way to delete in batches. The best GUI tool I've found for this purpose is Bucket Explorer. It deletes the files in an S3 bucket in chunks of 1000 files and does not crash when trying to open large buckets the way s3Fox and S3FM do.

I've also found a few scripts that you can use for this purpose. I haven't tried these scripts yet, but they look pretty straightforward.

Ruby:

require 'aws/s3'

AWS::S3::Base.establish_connection!(
  :access_key_id     => 'your access key',
  :secret_access_key => 'your secret key'
)

bucket = AWS::S3::Bucket.find('the bucket name')

while !bucket.empty?
  begin
    puts "Deleting objects in bucket"

    bucket.objects.each do |object|
      object.delete
      puts "There are #{bucket.objects.size} objects left in the bucket"
    end

    puts "Done deleting objects"
  rescue SocketError
    puts "Had socket error"
  end
end

Perl:

#!/usr/bin/perl
use Net::Amazon::S3;

my $aws_access_key_id = 'your access key';
my $aws_secret_access_key = 'your secret access key';
my $increment = 50; # 50 at a time
my $bucket_name = 'bucket_name';

my $s3 = Net::Amazon::S3->new({aws_access_key_id => $aws_access_key_id,
                               aws_secret_access_key => $aws_secret_access_key,
                               retry => 1,
                              });
my $bucket = $s3->bucket($bucket_name);

print "Incrementally deleting the contents of $bucket_name\n";

my $deleted = 1;
my $total_deleted = 0;
while ($deleted > 0) {
    print "Loading up to $increment keys...\n";
    $response = $bucket->list({'max-keys' => $increment, })
        or die $s3->err . ": " . $s3->errstr . "\n";
    $deleted = scalar(@{ $response->{keys} });
    $total_deleted += $deleted;
    print "Deleting $deleted keys($total_deleted total)...\n";
    foreach my $key ( @{ $response->{keys} } ) {
        my $key_name = $key->{key};
        $bucket->delete_key($key->{key})
            or die $s3->err . ": " . $s3->errstr . "\n";
    }
}

print "Deleting bucket...\n";
$bucket->delete_bucket
    or die $s3->err . ": " . $s3->errstr;
print "Done.\n";

Source: Tarkblog

Hope this helps!

Recent versions of s3cmd have --recursive

For example,

 ~/$ s3cmd rb --recursive s3://bucketwithfiles 

http://s3tools.org/kb/item5.htm

With s3cmd: create a new empty directory, then run s3cmd sync --delete-removed empty_directory s3://yourbucket

This may be a bug in S3Fox, because it is generally able to delete items recursively. However, I'm not sure if I have ever tried to delete a whole bucket and its contents at once.

As Stu mentioned, the JetS3t project includes a Java GUI applet, Cockpit, that you can easily run in a browser to manage your S3 buckets. It has both strengths and weaknesses compared to S3Fox, but there's a good chance it will help you deal with your troublesome bucket. It will require you to delete the objects first, though, and then the bucket.

Disclaimer: I am the author of JetS3t and Cockpit.

SpaceBlock also makes it simple to delete S3 buckets: right click the bucket, hit delete, wait for the job to complete in the transfers view, done.

This is the free and open source Windows S3 front-end that I maintain, so shameless plug alert, etc.

I've implemented bucket-destroy, a multi-threaded utility that does everything it takes to delete a bucket. It handles non-empty buckets, as well as keys in version-enabled buckets.

You can read the blog post here: http://bytecoded.blogspot.com/2011/01/recursive-delete-utility-for-version.html and the instructions here: http://code.google.com/p/bucket-destroy/

I've successfully deleted buckets that contain a double "//" in the key name, versioned keys, and DeleteMarker keys. Currently I'm running it against a bucket containing ~40,000,000 objects, and I've been able to delete 1,200,000 of them in a few hours on an m1.large. Note that the utility is multi-threaded but does not yet implement shuffling (which would allow horizontal scaling, launching the utility on several machines).
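
To illustrate why version-enabled buckets need this extra work, here is a sketch of the general approach (not bucket-destroy's actual code) using the newer aws-sdk-s3 Ruby gem; bucket and region names are placeholders:

require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-east-1")
bucket = "your-bucket-name"

# In a versioned bucket every key may have many versions plus delete
# markers, and all of them must be removed before the bucket can go away.
s3.list_object_versions(bucket: bucket).each do |page|
  targets = (page.versions + page.delete_markers).map do |v|
    { key: v.key, version_id: v.version_id }
  end
  targets.each_slice(1000) do |batch| # Multi-Object Delete takes up to 1000 keys
    s3.delete_objects(bucket: bucket, delete: { objects: batch, quiet: true })
  end
end

s3.delete_bucket(bucket: bucket)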

If you use Amazon's console and need to clear out a bucket on a one-time basis: browse to your bucket, select the top key, scroll to the bottom, then hold Shift on your keyboard and click the bottom one. It will select everything in between, and then you can right click and delete.

If you have ruby (and rubygems) installed, install the aws-s3 gem with

 gem install aws-s3 

or

 sudo gem install aws-s3 

Create a file delete_bucket.rb:

 require "rubygems" # optional require "aws/s3" AWS::S3::Base.establish_connection!( :access_key_id => 'access_key_id', :secret_access_key => 'secret_access_key') AWS::S3::Bucket.delete("bucket_name", :force => true) 

and run it:

 ruby delete_bucket.rb 

Since Bucket#delete returned timeout exceptions a lot for me, I have expanded the script:

 require "rubygems" # optional require "aws/s3" AWS::S3::Base.establish_connection!( :access_key_id => 'access_key_id', :secret_access_key => 'secret_access_key') while AWS::S3::Bucket.find("bucket_name") begin AWS::S3::Bucket.delete("bucket_name", :force => true) rescue end end 

I guess the easiest way would be to use S3fm, a free online file manager for Amazon S3. No applications to install, no third-party web site registrations. It runs directly from Amazon S3, secure and convenient.

Just select your bucket and hit delete.

One technique that can be used to avoid this problem is putting all the objects in a "folder" in the bucket, which lets you simply delete the folder and then delete the bucket. Additionally, the s3cmd tool available from http://s3tools.org can be used to delete a bucket with files in it:

 s3cmd rb --force s3://bucket-name 

I hacked together a script for doing it in Python; it successfully removed my 9000 objects. See this page:

https://efod.se/blog/archive/2009/08/09/delete-s3-bucket

Yet another shameless plug: I got tired of waiting for individual HTTP delete requests when I had to delete 250,000 items, so I wrote a Ruby script that does it multithreaded and completes in a fraction of the time:

http://github.com/sfeley/s3nuke/

That one works much faster in Ruby 1.9 because of the way threads are handled.
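
As an illustration of the technique (this is not the s3nuke code itself, just a minimal sketch using the newer aws-sdk-s3 gem, with placeholder bucket, region, and thread count), worker threads pull keys off a shared queue and issue deletes in parallel:

require "aws-sdk-s3"

BUCKET  = "your-bucket-name"
WORKERS = 8
queue   = Queue.new

# Producer: list every key into the queue, then one stop marker per worker.
lister = Aws::S3::Client.new(region: "us-east-1")
lister.list_objects_v2(bucket: BUCKET).each do |page|
  page.contents.each { |obj| queue << obj.key }
end
WORKERS.times { queue << nil }

# Consumers: each thread gets its own client and deletes keys until it
# pops a stop marker (nil).
threads = Array.new(WORKERS) do
  Thread.new do
    client = Aws::S3::Client.new(region: "us-east-1")
    while (key = queue.pop)
      client.delete_object(bucket: BUCKET, key: key)
    end
  end
end
threads.each(&:join)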

This is a hard problem. My solution is at http://stuff.mit.edu/~jik/software/delete-s3-bucket.pl.txt. It describes, in a comment at the top, all of the things I've determined can go wrong. Here's the current version of the script (if I change it, I will put a new version at the URL, but probably not here).

#!/usr/bin/perl

# Copyright (c) 2010 Jonathan Kamens.
# Released under the GNU General Public License, Version 3.
# See <http://www.gnu.org/licenses/>.

# $Id: delete-s3-bucket.pl,v 1.3 2010/10/17 03:21:33 jik Exp $

# Deleting an Amazon S3 bucket is hard.
#
# * You can't delete the bucket unless it is empty.
#
# * There is no API for telling Amazon to empty the bucket, so you have to
# delete all of the objects one by one yourself.
#
# * If you've recently added a lot of large objects to the bucket, then they
# may not all be visible yet on all S3 servers. This means that even after the
# server you're talking to thinks all the objects are all deleted and lets you
# delete the bucket, additional objects can continue to propagate around the S3
# server network. If you then recreate the bucket with the same name, those
# additional objects will magically appear in it!
#
# It is not clear to me whether the bucket delete will eventually propagate to
# all of the S3 servers and cause all the objects in the bucket to go away, but
# I suspect it won't. I also suspect that you may end up continuing to be
# charged for these phantom objects even though the bucket they're in is no
# longer even visible in your S3 account.
#
# * If there's a CR, LF, or CRLF in an object name, then it's sent just that
# way in the XML that gets sent from the S3 server to the client when the
# client asks for a list of objects in the bucket. Unfortunately, the XML
# parser on the client will probably convert it to the local line ending
# character, and if it's different from the character that's actually in the
# object name, you then won't be able to delete it. Ugh! This is a bug in the
# S3 protocol; it should be enclosing the object names in CDATA tags or
# something to protect them from being munged by the XML parser.
#
# Note that this bug even affects the AWS Web Console provided by Amazon!
#
# * If you've got a whole lot of objects and you serialize the delete process,
# it'll take a long, long time to delete them all.

use threads;
use strict;
use warnings;

# Keys can have newlines in them, which screws up the communication
# between the parent and child processes, so use URL encoding to deal
# with that.
use CGI qw(escape unescape); # Easiest place to get this functionality.
use File::Basename;
use Getopt::Long;
use Net::Amazon::S3;

my $whoami = basename $0;
my $usage = "Usage: $whoami [--help] --access-key-id=id --secret-access-key=key
  --bucket=name [--processes=#] [--wait=#] [--nodelete]

Specify --processes to indicate how many deletes to perform in
parallel. You're limited by RAM (to hold the parallel threads) and
bandwidth for the S3 delete requests.

Specify --wait to indicate seconds to require the bucket to be verified
empty. This is necessary if you create a huge number of objects and then
try to delete the bucket before they've all propagated to all the S3
servers (I've seen a huge backlog of newly created objects take *hours*
to propagate everywhere). See the comment at the top of the script for
more information about this issue.

Specify --nodelete to empty the bucket without actually deleting it.\n";

my($aws_access_key_id, $aws_secret_access_key, $bucket_name, $wait);
my $procs = 1;
my $delete = 1;

die if (! GetOptions(
    "help" => sub { print $usage; exit; },
    "access-key-id=s" => \$aws_access_key_id,
    "secret-access-key=s" => \$aws_secret_access_key,
    "bucket=s" => \$bucket_name,
    "processes=i" => \$procs,
    "wait=i" => \$wait,
    "delete!" => \$delete,
));
die if (! ($aws_access_key_id && $aws_secret_access_key && $bucket_name));

my $increment = 0;

print "Incrementally deleting the contents of $bucket_name\n";

$| = 1;

my(@procs, $current);

for (1..$procs) {
    my($read_from_parent, $write_to_child);
    my($read_from_child, $write_to_parent);
    pipe($read_from_parent, $write_to_child) or die;
    pipe($read_from_child, $write_to_parent) or die;
    threads->create(sub {
        close($read_from_child);
        close($write_to_child);
        my $old_select = select $write_to_parent;
        $| = 1;
        select $old_select;
        &child($read_from_parent, $write_to_parent);
    }) or die;
    close($read_from_parent);
    close($write_to_parent);
    my $old_select = select $write_to_child;
    $| = 1;
    select $old_select;
    push(@procs, [$read_from_child, $write_to_child]);
}

my $s3 = Net::Amazon::S3->new({aws_access_key_id => $aws_access_key_id,
                               aws_secret_access_key => $aws_secret_access_key,
                               retry => 1,
                              });
my $bucket = $s3->bucket($bucket_name);

my $deleted = 1;
my $total_deleted = 0;
my $last_start = time;
my($start, $waited);

while ($deleted > 0) {
    $start = time;
    print "\nLoading ", ($increment ? "up to $increment" : "as many as possible"),
        " keys...\n";
    my $response = $bucket->list({$increment ? ('max-keys' => $increment) : ()})
        or die $s3->err . ": " . $s3->errstr . "\n";
    $deleted = scalar(@{ $response->{keys} });
    if (! $deleted) {
        if ($wait and ! $waited) {
            my $delta = $wait - ($start - $last_start);
            if ($delta > 0) {
                print "Waiting $delta second(s) to confirm bucket is empty\n";
                sleep($delta);
                $waited = 1;
                $deleted = 1;
                next;
            }
            else {
                last;
            }
        }
        else {
            last;
        }
    }
    else {
        $waited = undef;
    }
    $total_deleted += $deleted;
    print "\nDeleting $deleted keys($total_deleted total)...\n";
    $current = 0;
    foreach my $key ( @{ $response->{keys} } ) {
        my $key_name = $key->{key};
        while (! &send(escape($key_name) . "\n")) {
            print "Thread $current died\n";
            die "No threads left\n" if (@procs == 1);
            if ($current == @procs-1) {
                pop @procs;
                $current = 0;
            }
            else {
                $procs[$current] = pop @procs;
            }
        }
        $current = ($current + 1) % @procs;
        threads->yield();
    }
    print "Sending sync message\n";
    for ($current = 0; $current < @procs; $current++) {
        if (! &send("\n")) {
            print "Thread $current died sending sync\n";
            if ($current == @procs-1) {
                pop @procs;
                last;
            }
            $procs[$current] = pop @procs;
            $current--;
        }
        threads->yield();
    }
    print "Reading sync response\n";
    for ($current = 0; $current < @procs; $current++) {
        if (! &receive()) {
            print "Thread $current died reading sync\n";
            if ($current == @procs-1) {
                pop @procs;
                last;
            }
            $procs[$current] = pop @procs;
            $current--;
        }
        threads->yield();
    }
}
continue {
    $last_start = $start;
}

if ($delete) {
    print "Deleting bucket...\n";
    $bucket->delete_bucket or die $s3->err . ": " . $s3->errstr;
    print "Done.\n";
}

sub send {
    my($str) = @_;
    my $fh = $procs[$current]->[1];
    print($fh $str);
}

sub receive {
    my $fh = $procs[$current]->[0];
    scalar <$fh>;
}

sub child {
    my($read, $write) = @_;
    threads->detach();
    my $s3 = Net::Amazon::S3->new({aws_access_key_id => $aws_access_key_id,
                                   aws_secret_access_key => $aws_secret_access_key,
                                   retry => 1,
                                  });
    my $bucket = $s3->bucket($bucket_name);
    while (my $key = <$read>) {
        if ($key eq "\n") {
            print($write "\n") or die;
            next;
        }
        chomp $key;
        $key = unescape($key);
        if ($key =~ /[\r\n]/) {
            my(@parts) = split(/\r\n|\r|\n/, $key, -1);
            my(@guesses) = shift @parts;
            foreach my $part (@parts) {
                @guesses = (map(($_ . "\r\n" . $part,
                                 $_ . "\r" . $part,
                                 $_ . "\n" . $part), @guesses));
            }
            foreach my $guess (@guesses) {
                if ($bucket->get_key($guess)) {
                    $key = $guess;
                    last;
                }
            }
        }
        $bucket->delete_key($key)
            or die $s3->err . ": " . $s3->errstr . "\n";
        print ".";
        threads->yield();
    }
    return;
}

I'm one of the developers on the Bucket Explorer team. We provide different options for deleting a bucket, depending on the user's choice: 1) Quick Delete - this option deletes your data from the bucket in chunks of 1000. 2) Permanent Delete - this option deletes objects from a queue.

How to delete Amazon S3 files and buckets?

Amazon recently added a new feature, "Multi-Object Delete", which allows up to 1,000 objects to be deleted with a single API request. This should simplify the process of deleting huge numbers of files from a bucket.

The documentation for the new feature is available here: http://docs.amazonwebservices.com/AmazonS3/latest/dev/DeletingMultipleObjects.html
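
A minimal sketch of what using that API looks like from the newer aws-sdk-s3 Ruby gem (bucket and region names are placeholders): each delete_objects call carries up to 1,000 keys, so a 50,000-object bucket empties in 50 round trips instead of 50,000.

require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-east-1")
bucket = "your-bucket-name"

# Each page from list_objects_v2 holds at most 1000 keys, which is also
# the Multi-Object Delete limit, so one delete request per page suffices.
s3.list_objects_v2(bucket: bucket).each do |page|
  keys = page.contents.map { |obj| { key: obj.key } }
  next if keys.empty?
  s3.delete_objects(bucket: bucket, delete: { objects: keys, quiet: true })
end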

I've always ended up using their C# API and little scripts to do this. I'm not sure why S3Fox can't do it, but that functionality appears to be broken within it at the moment. I'm sure many of the other S3 tools can do it as well, though.

Delete all of the objects in the bucket first. Then you can delete the bucket itself.
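
In the newer aws-sdk-s3 Ruby gem that two-step dance is wrapped up in a single resource call; a sketch, assuming a placeholder bucket and region (and, if I recall the resource API correctly, delete! empties the bucket first and then removes it):

require "aws-sdk-s3"

bucket = Aws::S3::Bucket.new("your-bucket-name",
                             client: Aws::S3::Client.new(region: "us-east-1"))
bucket.delete! # removes every object (and version), then the bucket itself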

Apparently, one cannot delete a bucket with objects in it, and S3Fox does not do this for you.

I've had other little issues with S3Fox myself, like this, and now use a Java-based tool, jets3t, which is more forthcoming about error conditions. There must be others, too.

You must make sure you have the correct write permission set for the bucket, and the bucket must contain no objects. Some useful tools that can assist your deletion: CrossFTP, which lets you view and delete buckets like an FTP client, and the jets3t tool mentioned above.

I'll have to have a look at some of these alternative file managers. I've used (and liked) BucketExplorer, which you can get from - surprisingly - http://www.bucketexplorer.com/

It's a 30-day free trial, then (currently) US$49.99 per licence (US$49.95 on the purchase cover page).

This is what I use. Just simple Ruby code.

case bucket.size
when 0
  puts "Nothing left to delete"
when 1..1000
  bucket.objects.each do |item|
    item.delete
    puts "Deleting - #{bucket.size} left"
  end
end

Use the Amazon web management console, with Google Chrome for speed. It deleted the objects a lot faster than Firefox (about 10 times faster). I had 60,000 objects to delete.
