pshell and S3 remotes

Introduction

This page is specific to S3 remote types (e.g. acacia and AWS); it does not apply to the more specialised banksia service. If you need more sophisticated policies and lifecycles, you can use the generated ones shown here as a starting point, but you will have to use awscli to add any customisations. Please refer to Acacia access and identities and Using policies for more details.

Setup

An acacia project can be added to your list of pshell remotes by choosing an arbitrary remote name (e.g. pawsey123) and supplying the access/secret key pair after you select the remote and log in. After this, the usual file and folder commands will be available.

pshell:/> remote pawsey123 s3 https://projects.pawsey.org.au
pshell:/> remote pawsey123

pawsey123:/>login
Access: xyz
Secret: ***

Bucket policies

Simple S3 policies can also be automatically created for you, noting that:

  1. Policies are attached to buckets and are a list of statements about actions allowed or denied for that bucket only.
  2. Policies override the default project permissions, so care should be taken not to lock yourself out of the bucket. However, you can still use pshell to remove all policies and regain access.
  3. Any DENY in a policy statement counts as a negative permission overall for that action, even if there is also an ALLOW elsewhere.
  4. Policies only grant visibility of objects in a bucket, not visibility of the bucket itself.
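
The precedence rule in point 3 can be illustrated with a small sketch. The evaluator below is not part of pshell or S3; it is a toy model (plain Python) of how S3 combines policy statements: an explicit Deny for an action always wins over an Allow, and no matching statement means no access.

```python
# Toy model of S3 policy evaluation (point 3 above): an explicit "Deny"
# always beats an "Allow" for the same user and action.
def evaluate(statements, user, action):
    decision = "ImplicitDeny"  # no matching statement => no access
    for s in statements:
        if user not in s["Principal"] or action not in s["Action"]:
            continue
        if s["Effect"] == "Deny":
            return "Deny"  # explicit Deny is final, regardless of other Allows
        decision = "Allow"
    return decision

statements = [
    {"Effect": "Allow", "Principal": ["user1", "user3"], "Action": ["s3:GetObject"]},
    {"Effect": "Deny",  "Principal": ["user3"],          "Action": ["s3:GetObject"]},
]

print(evaluate(statements, "user1", "s3:GetObject"))  # Allow
print(evaluate(statements, "user3", "s3:GetObject"))  # Deny - Deny wins over Allow
print(evaluate(statements, "user2", "s3:GetObject"))  # ImplicitDeny
```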

You can use the pshell command "info mybucket" to examine the active policies on that bucket.

Example 1 - give a list of Pawsey users read-only access
pawsey123:/>policy my-bucket +r user1,user2,user3,user4
Setting bucket=my-bucket, perm=+r, for user(s)='user1,user2,user3,user4'

Note: if a user attempts to list buckets they will see nothing. However, if they attempt to list objects inside the bucket, the objects within my-bucket/ will be shown - see point 4 above.


Example 2 - revoke user3 from having read access
pawsey123:/>policy my-bucket -r user3
Setting bucket=my-bucket, perm=-r, for user(s)='user3'


Example 3 - grant read and write permission
pawsey123:/>policy my-bucket +rw user1
Setting bucket=my-bucket, perm=+rw, for user(s)='user1'


Example 4 - make a bucket read-only and publicly accessible
pawsey123:/>policy my-bucket +r *
Setting bucket=my-bucket, perm=+r, for user(s)=None


Example 5 - remove all policies on a bucket
pawsey123:/>policy my-bucket -
Deleting all policies on bucket=my-bucket
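
For reference, a policy like the one in Example 1 corresponds to a standard S3 bucket policy document. The sketch below builds such a document with plain Python; the exact JSON pshell generates may differ, and the ARN forms are illustrative assumptions for an S3-compatible store. A document like this is what you would customise and apply with awscli (e.g. `aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json`) if the generated policy is not flexible enough.

```python
import json

# Sketch of a read-only bucket policy like the one Example 1 produces.
# The principal ARN format is an illustrative assumption; check what your
# S3-compatible store expects before applying a hand-written policy.
users = ["user1", "user2", "user3", "user4"]
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": [f"arn:aws:iam:::user/{u}" for u in users]},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            # Both the bucket and its objects must be listed as resources:
            # ListBucket applies to the bucket, GetObject to the objects.
            "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
        }
    ],
}

print(json.dumps(policy, indent=2))
```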

Bucket lifecycles

There are two types of bucket lifecycle that you may configure using pshell:

  • failed uploads - where some data chunks were not received;
  • object versioning - which can be used to restore deleted objects.

Here is an example of a bucket that has versioning enabled and lifecycle rules to clean up old (non-current) versions and failed uploads:

pshell:/>info my-bucket

versioning : "Enabled"

=== Lifecycle ===
{
    "ID": "cleanup_multipart",
    "Prefix": "",
    "Status": "Enabled",
    "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 30
    }
}
{
    "ID": "cleanup_versions",
    "Prefix": "",
    "Status": "Enabled",
    "NoncurrentVersionExpiration": {
        "NoncurrentDays": 30
    }
}

Cleaning up failed uploads

It is important to always check for any error messages (or exit codes, if you're scripting) in your transfers. This is particularly important for acacia, as incomplete uploads do not automatically have their partially uploaded data chunks removed. You can check for these incomplete uploads using the pshell info command:

pshell:/>info my-bucket
...
incomplete uploads : 164

Such failed uploads will consume your quota, so it is recommended that a lifecycle rule be put in place to clean them up. A simple lifecycle rule that deletes any partial uploads still incomplete 7 days after they were started can be added as follows:

pshell> lifecycle my-bucket +m 7
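
The `+m 7` command corresponds to a standard S3 lifecycle configuration like the sketch below (plain Python; the rule ID is an assumption based on the `info` output shown earlier). If you need a more customised rule, a document of this shape is what you would apply with awscli (e.g. `aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle.json`).

```python
import json

# Sketch of the lifecycle configuration that "+m 7" corresponds to.
# The rule ID "cleanup_multipart" is assumed from the `info` output above.
days = 7
config = {
    "Rules": [
        {
            "ID": "cleanup_multipart",
            "Filter": {"Prefix": ""},  # empty prefix: rule applies to the whole bucket
            "Status": "Enabled",
            # Abort (and discard the chunks of) any multipart upload still
            # incomplete this many days after it was started.
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": days},
        }
    ]
}

print(json.dumps(config, indent=2))
```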

Object versioning

If object versioning is enabled for your bucket, then deletion of objects does not happen immediately. The following will ensure versioning is enabled and add a lifecycle rule that deletes old versions after 30 days:

pshell> lifecycle my-bucket +v 30
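
In the same vein, `+v 30` amounts to enabling versioning on the bucket plus a lifecycle rule that expires non-current versions. The sketch below (plain Python; rule ID again assumed from the earlier `info` output) shows the two pieces of configuration involved:

```python
import json

# Sketch of what "+v 30" sets up: versioning enabled, plus a rule that
# permanently removes versions 30 days after they become non-current.
# The rule ID "cleanup_versions" is assumed from the `info` output above.
days = 30
versioning = {"Status": "Enabled"}
lifecycle = {
    "Rules": [
        {
            "ID": "cleanup_versions",
            "Filter": {"Prefix": ""},
            "Status": "Enabled",
            "NoncurrentVersionExpiration": {"NoncurrentDays": days},
        }
    ]
}

print(json.dumps({"versioning": versioning, "lifecycle": lifecycle}, indent=2))
```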

You can then review which objects in the bucket are currently flagged as deleted:

pshell> lifecycle my-bucket --review
Reviewing deletions: bucket=my-bucket, prefix=
 * folder1/my_file.txt

and restore deleted objects in the window before the lifecycle cleanup policy permanently removes them:

pshell> lifecycle my-bucket/folder1 --restore
Restoring deletions: bucket=my-bucket, prefix=folder1
restoring: folder1/my_file.txt
Restored object count: 1
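
Under versioning, "deleting" an object actually adds a delete marker as its newest version; restoring an object means removing that marker so the previous version becomes current again. The sketch below is a toy model of that behaviour (plain Python, not the pshell implementation), assuming a newest-first version listing as S3 returns:

```python
# Toy model of restore-by-removing-delete-marker under S3 versioning.
# Not the pshell implementation; version IDs are illustrative.
versions = [  # newest-first, as S3 list-object-versions returns them
    {"Key": "folder1/my_file.txt", "VersionId": "v2", "IsDeleteMarker": True},
    {"Key": "folder1/my_file.txt", "VersionId": "v1", "IsDeleteMarker": False},
]

def restore(versions, prefix):
    restored, kept, seen = [], [], set()
    for v in versions:
        key = v["Key"]
        # An object is "deleted" when its *latest* version is a delete marker;
        # dropping that marker makes the previous version current again.
        if key.startswith(prefix) and key not in seen and v["IsDeleteMarker"]:
            restored.append(key)
        else:
            kept.append(v)
        seen.add(key)
    return restored, kept

restored, kept = restore(versions, "folder1")
print("Restored object count:", len(restored))  # 1
```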

If you wish to preserve all older versions of objects, you will need to create and apply your own custom rules using the AWS CLI: AWS documentation. Note that preserving multiple versions of objects will consume your usage quota.

Finally, the pshell lifecycle commands shown above will generally overwrite any old lifecycle rules with the new rules.
