This page is specific to S3 remote types (e.g. acacia and AWS); it does not apply to the more specialised banksia service. If you need more sophisticated policies and lifecycles, you can use the generated ones shown here as a starting point, but you will have to use awscli to add any customisations. Please refer to Acacia access and identities and Using policies for more details.
An acacia project can be added to your list of pshell remotes by choosing an arbitrary remote name (e.g. project123) and supplying the access/secret pair after you select the remote and log in. After this, the usual file and folder commands will be available.
pshell:/> remote pawsey123 s3 https://projects.pawsey.org.au
pshell:/> remote pawsey123
pawsey123:/> login
Access: xyz
Secret: ***
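If you prefer to work with awscli directly (for example, for the customisations mentioned above), a roughly equivalent setup is sketched below; the profile name is arbitrary, the endpoint is the one from the pshell example, and your access/secret pair is entered at the aws configure prompts:

# Sketch: store the access/secret pair in an AWS CLI profile (profile name is arbitrary)
aws configure --profile pawsey123
# List your buckets via the Acacia endpoint to confirm the credentials work
aws s3 ls --profile pawsey123 --endpoint-url https://projects.pawsey.org.au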
Simple S3 policies can also be created for you automatically. You can use the pshell command "info mybucket" to examine the active policies on that bucket.
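If you need to see the raw policy document itself (rather than the summary from "info"), it can be fetched with awscli; this is a sketch only, using the placeholder bucket and profile names from the examples on this page:

# Sketch: retrieve the bucket policy as JSON (bucket/profile names are placeholders)
aws s3api get-bucket-policy --bucket mybucket \
    --profile pawsey123 --endpoint-url https://projects.pawsey.org.au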
There are two types of bucket lifecycle that you may configure using pshell: cleanup of incomplete (failed) multipart uploads, and expiration of old, non-current object versions when versioning is enabled. Both are described below.
Here is an example of a bucket that has both versioning enabled and lifecycle rules to clean up old (non-current) versions and failed uploads:
pshell:/> info my-bucket
versioning : "Enabled"
=== Lifecycle ===
{
    "ID": "cleanup_multipart",
    "Prefix": "",
    "Status": "Enabled",
    "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 30
    }
}
{
    "ID": "cleanup_versions",
    "Prefix": "",
    "Status": "Enabled",
    "NoncurrentVersionExpiration": {
        "NoncurrentDays": 30
    }
}
It is important to always check for any error messages (or exit codes, if you are scripting) in your transfers. This is particularly important for acacia, as incomplete uploads do not automatically have their partially uploaded data chunks removed. You can check for these "incomplete uploads" using the pshell info command:
pshell:/> info my-bucket
...
incomplete uploads : 164
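The individual incomplete uploads can also be listed with awscli if you want to see which keys they belong to; a sketch, again using the placeholder profile:

# Sketch: list in-progress/abandoned multipart uploads for the bucket
aws s3api list-multipart-uploads --bucket my-bucket \
    --profile pawsey123 --endpoint-url https://projects.pawsey.org.au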
Such failed uploads will consume your quota, so it is recommended that a lifecycle be put in place to clean them up. A simple lifecycle that deletes any partial uploads that have not successfully completed within 7 days of being started can be added as follows:
pshell> lifecycle my-bucket +m 7
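You can confirm the rule was written either with the pshell "info my-bucket" command shown earlier or, as a sketch, by fetching the configuration with awscli:

# Sketch: show the lifecycle rules currently applied to the bucket
aws s3api get-bucket-lifecycle-configuration --bucket my-bucket \
    --profile pawsey123 --endpoint-url https://projects.pawsey.org.au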
If object versioning is enabled for your bucket, then deletion of objects does not happen immediately. The following will ensure versioning is enabled and add a lifecycle rule that deletes old versions after 30 days:
pshell> lifecycle my-bucket +v 30
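If you want to double-check the versioning state independently of pshell, the equivalent awscli query is sketched below:

# Sketch: report whether versioning is Enabled or Suspended on the bucket
aws s3api get-bucket-versioning --bucket my-bucket \
    --profile pawsey123 --endpoint-url https://projects.pawsey.org.au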
You will then have the option to review the current state of the bucket:
pshell> lifecycle my-bucket --review
Reviewing deletions: bucket=my-bucket, prefix=
 * folder1/my_file.txt
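Behind the scenes, a "deleted" object in a versioned bucket is hidden by a delete marker while its older versions remain. If you want to inspect those versions and markers directly, a sketch of the awscli equivalent is:

# Sketch: list object versions and delete markers under a prefix
aws s3api list-object-versions --bucket my-bucket --prefix folder1/ \
    --profile pawsey123 --endpoint-url https://projects.pawsey.org.au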
and restore deleted objects in the window before the lifecycle cleanup policy permanently removes them:
pshell> lifecycle my-bucket/folder1 --restore
Restoring deletions: bucket=my-bucket, prefix=folder1
restoring: folder1/my_file.txt
Restored object count: 1
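For reference, restoring with awscli amounts to deleting the delete marker so the previous version becomes current again; the version id below is a placeholder that must be taken from the list-object-versions output:

# Sketch: remove the delete marker (its VersionId comes from list-object-versions)
aws s3api delete-object --bucket my-bucket --key folder1/my_file.txt \
    --version-id DELETE_MARKER_VERSION_ID \
    --profile pawsey123 --endpoint-url https://projects.pawsey.org.au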
If you wish to preserve all older versions of objects, then you will need to create and apply your own custom rules using the AWS CLI (see the AWS documentation). Note that preserving multiple versions of objects will consume your usage quota.
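As one possible starting point, a custom configuration that preserves all older versions (by simply omitting any NoncurrentVersionExpiration rule) while still cleaning up failed multipart uploads might look like the sketch below; the file name and rule values are placeholders:

# keep-versions.json (sketch: no NoncurrentVersionExpiration rule, so old versions are kept)
{
  "Rules": [
    {
      "ID": "cleanup_multipart",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }
  ]
}

# Sketch: apply the configuration above (this replaces any existing lifecycle rules)
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
    --lifecycle-configuration file://keep-versions.json \
    --profile pawsey123 --endpoint-url https://projects.pawsey.org.au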
Finally, note that the pshell lifecycle commands shown above will generally overwrite any existing lifecycle rules on the bucket with the new rules.