Introduction

This page is specific to S3 remote types (e.g. Acacia and AWS); it does not apply to the more specialised banksia service. If you need more sophisticated policies and lifecycles, you can use the generated ones shown here as a starting point, but you will have to use awscli to add any customisations. Please refer to Acacia access and identities and Using policies for more details.
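The generated policies are ordinary S3 bucket-policy JSON, so one customisation workflow is to build (or edit) a policy document yourself and apply it with awscli via "aws s3api put-bucket-policy". As a hedged sketch only (the bucket name and user identifiers below are placeholders, not values from this page, and the exact principal format depends on the Acacia identity setup), a read-only policy document can be assembled like this:

```python
import json

# Placeholder values -- substitute your own bucket and user identifiers.
bucket = "my-bucket"
users = ["arn:aws:iam:::user/user1", "arn:aws:iam:::user/user2"]

# A standard S3 read-only bucket policy: ListBucket applies to the bucket
# itself, GetObject to the objects inside it (note the trailing /*).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": users},
            "Action": ["s3:ListBucket"],
            "Resource": f"arn:aws:s3:::{bucket}",
        },
        {
            "Effect": "Allow",
            "Principal": {"AWS": users},
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        },
    ],
}

# Save to a file for: aws s3api put-bucket-policy --bucket my-bucket \
#                         --policy file://policy.json
print(json.dumps(policy, indent=2))
```

The two-statement split matters: ListBucket is a bucket-level action, while GetObject is an object-level action, so they need different Resource ARNs.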

Setup

An acacia project can be added to your list of pshell remotes by choosing an arbitrary remote name (e.g. project123) and supplying the access/secret pair after you select the remote and log in. An example is given below. After this, the usual file and folder commands will be available.

Example...


Code Block
pshell:/> remote add project123 s3 https://projects.pawsey.org.au
pshell:/> remote project123
 
project123:/>login
Access: xyz
Secret: ***

Info

The info command reports details for a bucket, including any active policies and lifecycle rules.


Policies

Simple S3 policies can also be automatically created for you, noting that:

...

Note

You can use the pshell command "info mybucket" to examine the active policies on that bucket.


Examples...


Panel
titleExample 1 - give a list of Pawsey usernames (user1, user2, user3, and user4) readonly access to a bucket called my-bucket.

Code Block
project123:/>policy my-bucket +r user1,user2,user3,user4
Setting bucket=my-bucket, perm=+r, for user(s)='user1,user2,user3,user4'

Note: if a user attempts to list buckets they will see nothing. However, if they attempt to list objects inside the bucket it will show the objects inside my-bucket/ - see Note 4.


Panel
titleExample 2 - revoke user3 from having read access to the bucket.


Code Block
project123:/>policy my-bucket -r user3
Setting bucket=my-bucket, perm=-r, for user(s)='user3'



Panel
titleExample 3 - grant read and write permission on a bucket.


Code Block
project123:/>policy my-bucket +rw user1
Setting bucket=my-bucket, perm=+rw, for user(s)='user1'



Panel
titleExample 4 - make a bucket readonly and publicly accessible.


Code Block
project123:/>policy my-bucket +r *
Setting bucket=my-bucket, perm=+r, for user(s)=None



Panel
titleExample 5 - remove all policies on a bucket.


Code Block
project123:/>policy my-bucket -
Deleting all policies on bucket=my-bucket



Lifecycles

Simple S3 bucket lifecycles can also be automatically created for you, covering multi-part uploads and versioning.

Note

Use the pshell command "info mybucket" to check whether there are any current lifecycle rules, as the following examples may overwrite them.
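For reference, these lifecycle rules are standard S3 lifecycle configuration JSON, which can also be applied directly with "aws s3api put-bucket-lifecycle-configuration". A hedged sketch of roughly what the "+m 7" and "+v 30" rules amount to (the rule IDs here are illustrative, not pshell's actual output):

```python
import json

# Illustrative rule IDs; an empty Filter applies a rule to the whole bucket.
lifecycle = {
    "Rules": [
        {
            "ID": "abort-incomplete-multipart",
            "Status": "Enabled",
            "Filter": {},
            # Clean up failed multi-part uploads after 7 days (cf. "+m 7").
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        },
        {
            "ID": "expire-noncurrent-versions",
            "Status": "Enabled",
            "Filter": {},
            # Remove non-current object versions 30 days after they are
            # superseded or deleted (cf. "+v 30").
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
        },
    ],
}

# Save to a file for: aws s3api put-bucket-lifecycle-configuration \
#     --bucket my-bucket --lifecycle-configuration file://lifecycle.json
print(json.dumps(lifecycle, indent=2))
```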


Examples...


Panel
titleExample 1 - enable multi-part and expired version cleanup after 30 days


Code Block
pshell> lifecycle my-bucket +mv



Panel
titleExample 2 - clean up incomplete multi-part uploads after 7 days.


Code Block
pshell> lifecycle my-bucket +m 7



Panel
titleExample 3 - turn on versioning and delete expired non-current objects after 30 days.


Code Block
pshell> lifecycle my-bucket +v 30


If versioning is enabled on a bucket, then you will have the option to review and restore deleted objects in the window before the lifecycle cleanup policy permanently removes them.
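This review/restore window follows from how S3 versioning handles deletion: a delete does not remove data, it places a delete marker on top of the object, and restoring amounts to removing that marker so the previous version becomes current again. A minimal in-memory sketch of these semantics (a toy model, not pshell or S3 code):

```python
# Toy model of S3 versioning: each key holds a stack of versions,
# and a delete pushes a marker instead of removing any data.
class VersionedBucket:
    def __init__(self):
        self.versions = {}  # key -> list of (kind, data), newest last

    def put(self, key, data):
        self.versions.setdefault(key, []).append(("data", data))

    def delete(self, key):
        # A versioned delete adds a delete marker; older versions survive.
        self.versions.setdefault(key, []).append(("delete-marker", None))

    def get(self, key):
        stack = self.versions.get(key, [])
        if not stack or stack[-1][0] == "delete-marker":
            return None  # key appears deleted to normal reads
        return stack[-1][1]

    def restore(self, key):
        # Restoring = removing the newest delete marker, which exposes
        # the previous version again (what "lifecycle ... --restore" achieves,
        # within the window before cleanup removes old versions for good).
        stack = self.versions.get(key, [])
        if stack and stack[-1][0] == "delete-marker":
            stack.pop()


bucket = VersionedBucket()
bucket.put("folder1/my_file.txt", "contents")
bucket.delete("folder1/my_file.txt")
assert bucket.get("folder1/my_file.txt") is None      # looks deleted
bucket.restore("folder1/my_file.txt")
assert bucket.get("folder1/my_file.txt") == "contents"  # recovered
```

Once the lifecycle rule expires the non-current version (after 30 days in Example 3), there is nothing left under the delete marker and the object can no longer be restored.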

Panel
titleExample 4 - Reviewing deleted objects


Code Block
pshell> lifecycle my-bucket --review
Reviewing deletions: bucket=my-bucket, prefix=
 * folder1/my_file.txt



Panel
titleExample 5 - Restoring an object


Code Block
pshell> lifecycle my-bucket/folder1 --restore
Restoring deletions: bucket=my-bucket, prefix=folder1
restoring: folder1/my_file.txt
Restored object count: 1