In order to use the Simple Cloud Files addon, you will need an Amazon AWS account as well as an Amazon S3 Bucket. For security purposes, we suggest creating a user with access only to the Buckets you wish to use.

    Additionally, you will need to configure your S3 Bucket to allow our addon to communicate with it.

    Follow the steps below to set up and configure your S3 Bucket, and to create IAM credentials.

    Setting up your S3 Bucket

    To configure the Simple Cloud Files plugin, you will need an Amazon S3 Bucket.
    You can use an existing bucket if you already have one, or follow these steps:

    Log into your Amazon AWS Account, and navigate to the S3 Management Console.

    From there, click on "Create Bucket", fill out the dialog, and click the "Next" button.

    Once your bucket is created, you will need to allow the Simple Cloud Files addon to communicate with your bucket. To do so, you need to edit the CORS Configuration of the bucket.

    Select the bucket in the list, and click on the Permissions tab to show the bucket permissions. In the permissions tab, click on the CORS Configuration button, which will bring up the "CORS Configuration editor".

    In the CORS Configuration editor, replace the existing configuration with this one:

    <?xml version="1.0" encoding="UTF-8"?>
    <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
        <CORSRule>
            <AllowedOrigin>https://*</AllowedOrigin>
            <AllowedMethod>GET</AllowedMethod>
            <AllowedMethod>PUT</AllowedMethod>
            <AllowedMethod>POST</AllowedMethod>
            <AllowedMethod>HEAD</AllowedMethod>
            <AllowedMethod>DELETE</AllowedMethod>
            <AllowedHeader>*</AllowedHeader>
            <ExposeHeader>ETag</ExposeHeader>
        </CORSRule>
    </CORSConfiguration>

    The contents of the dialog should look like this:

    This accomplishes the following:
    1) it allows our addon (which lives at https://*) to communicate with your bucket
        This is the precursor to us being able to do anything with your bucket

    2) it allows our addon to make GET / PUT / POST / HEAD / DELETE requests to your bucket
        This is needed to retrieve files, upload new files, or delete files

    3) it provides our addon with the ETag header in responses
        This is needed for things like multi-part uploads of larger files


    'CORS' stands for Cross-Origin Resource Sharing.
    For security reasons, browsers restrict requests coming from a different domain. In this case, AWS keeps your bucket safe from others, and doesn't allow anyone (including us) to communicate with your bucket from the browser. The above settings tell AWS that it's okay for our addon to communicate with your bucket.

    Setting up S3 Credentials

    In addition to the S3 Bucket, you will need AWS credentials to connect to it.


    For security purposes, we suggest creating an IAM User that only has permissions to this specific bucket, and nothing else.

    To create such an IAM user, log into your Amazon AWS Account, and navigate to the "Identity and Access Management" section.

    From the "Identity and Access Management" Dashboard, click on the "Users" section in the left navigation, and then on the "Create User" button.

    Fill out one of the user name fields with a desired username, and make sure the checkbox for "Generate an access key for each user" is checked. Click the "Create" button when done.

    This will create the user, and bring you to a screen to see the Security Credentials.

    A Note about Default Permissions

    Write these credentials down, or download them (blue button in the footer), as this is the only time these credentials are visible.

    Once done, return to the list of users, and click on the user record to get to the user's profile. Expand the "Permissions" section, then expand the "Inline Policies" section, and create a new inline policy.

    When creating the policy, select "Custom Policy", and give it a name. We usually use something like "Bucket [BUCKET NAME] Access", but any name will suffice.
    For the actual policy, paste the following, but replace YOUR_BUCKET_NAME_HERE with the name of your S3 bucket.

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": "s3:*",
                  "Resource": [
                      "arn:aws:s3:::YOUR_BUCKET_NAME_HERE",
                      "arn:aws:s3:::YOUR_BUCKET_NAME_HERE/*"
                  ]
              }
          ]
      }

    The dialog should look something like this:

    Click on "Apply Policy", and you're done setting up S3, and ready to configure the plugin with your new bucket.


    Configuring the Simple Cloud Files Plugin

    To configure the addon to use your S3 Bucket, navigate to the addons section of Confluence, and then "Manage add-ons". Find the Simple Cloud Files plugin, and then click on "Configure".

    Alternatively, you can also navigate to a space and click on the "Cloud Files" link in the space navigation on the left, which will bring up the Cloud Files section. From there you can easily get to the bucket configuration.

    Configuring the Global Bucket

    Simple Cloud Files allows for a single (global) bucket to be shared across all Confluence spaces. Once this bucket is configured, all spaces automatically use it.

    Each space automatically receives its own folder within the bucket, based on the space key. Within each space folder, each page receives a folder based on the page id. The resulting folder structure looks somewhat like this:

    ├── ABC/
    │    ├── spaceFiles/
    │    │    ├── Mockups/
    │    │    └── Timesheets.xls
    │    │
    │    ├── 195232/
    │    │    ├── screenshot1.png
    │    │    └── Requirements.doc
    │    │
    │    └── 143232/
    │         └── API-Spec.pdf
    └── XYZ/
         ├── spaceFiles/
         │    └── Style Guide.pdf
         ├── 13433/
         │    └── report.xls
         └── 5534/
              └── Designs.psd
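    Since the folder layout is deterministic, the key prefix for any space or page can be computed directly from the space key and page id. A minimal sketch (the helper names here are illustrative, not part of the addon):

```python
def space_files_prefix(space_key: str) -> str:
    """Key prefix for space-level files, e.g. "ABC/spaceFiles/"."""
    return f"{space_key}/spaceFiles/"

def page_files_prefix(space_key: str, page_id: int) -> str:
    """Key prefix for page-level files, e.g. "ABC/195232/"."""
    return f"{space_key}/{page_id}/"

print(page_files_prefix("ABC", 195232))  # ABC/195232/
```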
    To configure the global bucket, click on the "Edit Bucket" button, and fill in the form.

    Basic Settings

    The basic settings consist of the credentials to connect to the bucket, as well as the name of the bucket itself. These are required in order to set up a bucket.

    Setting             Description
    AccessKey           The AccessKey is provided by Amazon when setting up the IAM credentials.
    Secret AccessKey    The Secret AccessKey is provided by Amazon when setting up the IAM credentials.
    Bucket Name         The name of an existing bucket. The above credentials need access to this bucket.

    Testing the Bucket Connection

    Once you've entered all the credentials, you can test the connection to the bucket via the "Test Connect" button. This will attempt to connect to the bucket, and ensure the credentials are valid, and have permissions to the bucket.

    If successful, you should see a shiny green success message:

    If the connection failed, you will see a red error message with some details about why the connection failed.

    The error message can sometimes be a bit vague, because AWS doesn't always return very descriptive errors.

    If the connection fails, and it is not due to incorrect credentials, ensure the bucket name is spelled correctly, the bucket has the proper CORS configuration, and the credentials you've provided have access to the bucket.

    Testing the CORS Configuration

    Once the credentials are set up and you are able to connect to the bucket, you can test the CORS configuration via the "Test CORS Settings" button. This will connect to your bucket, read the CORS config, and ensure the settings we require are present.

    If this check fails, it is most likely due to a missing setting, such as a missing AllowedHeader, or a missing AllowedMethod like GET. Either of those would prevent us from communicating with the bucket.
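    As a sketch of what such a check involves, the snippet below parses an S3 CORS document and verifies that the settings described earlier are present. The XML shape follows the S3 CORS schema; the function name is illustrative and not part of the addon:

```python
import xml.etree.ElementTree as ET

REQUIRED_METHODS = {"GET", "PUT", "POST", "HEAD", "DELETE"}

def cors_config_ok(xml_text: str) -> bool:
    """Return True if any CORS rule allows all required methods,
    declares at least one AllowedHeader, and exposes the ETag header."""
    root = ET.fromstring(xml_text)
    # Extract the namespace prefix (if any) so tag lookups work regardless of xmlns.
    ns = root.tag.split("}")[0] + "}" if root.tag.startswith("{") else ""
    for rule in root.iter(f"{ns}CORSRule"):
        methods = {m.text for m in rule.iter(f"{ns}AllowedMethod")}
        headers = [h.text for h in rule.iter(f"{ns}AllowedHeader")]
        exposed = [e.text for e in rule.iter(f"{ns}ExposeHeader")]
        if REQUIRED_METHODS <= methods and headers and "ETag" in exposed:
            return True
    return False
```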

    Advanced Settings

    The advanced settings are options that allow you to further tweak how Simple Cloud Files interacts with your S3 Bucket.

    Setting             Description
    Prefix              By default, we store files in the root of the bucket (based on the structure detailed above). If you wish to store the files somewhere else in the bucket (a subfolder, perhaps), enter that path here as a prefix.
    Timeout             This setting determines how much time we give each upload/download request. The default is 10 minutes, meaning a file upload has 10 minutes to finish before it is aborted for taking too long. If you are on a slow connection, or have large files to upload, we suggest increasing the timeout.
    Full Navigation     Normally, each page only allows navigating the files that belong to that page. The same is true for spaces: you can only see the space-level files. Enabling this setting allows navigating the full bucket from anywhere. Thus, from within a page, you could navigate upwards and see space-level files.
    Multipart Upload    This setting controls whether file uploads are processed as a single request, or split into multiple chunks that are uploaded separately and then assembled back together once the upload finishes. Multipart upload is required if you want to upload large files. This is turned on by default.
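    For a sense of how multipart upload splits a file, the arithmetic is just a ceiling division of the file size by the part size. The 5 MB part size below is an assumption for illustration (it is S3's minimum multipart part size, not necessarily what the addon uses):

```python
import math

PART_SIZE = 5 * 1024 * 1024  # 5 MB, S3's minimum multipart part size

def part_count(file_size: int, part_size: int = PART_SIZE) -> int:
    """Number of chunks a file is split into for a multipart upload."""
    return max(1, math.ceil(file_size / part_size))

print(part_count(23 * 1024 * 1024))  # a 23 MB file uploads in 5 parts
```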

    Disabling the Global Bucket

    As you may have noticed, it is possible to disable the Global bucket. To do so, simply switch the toggle button.

    If the global bucket is disabled, none of the Confluence spaces will have a bucket assigned to them. This means each space will either need to supply its own bucket, or disable Cloud Files entirely.

    Once you've filled out the form, click on "Save Settings", and you're ready to use the plugin.

    Configuring a Space Bucket

    If you don't want to have a Global bucket that's shared between all spaces, or simply would like to have a separate bucket for some of your spaces, you can do so by configuring Space Buckets.

    As you can see, each Confluence space is listed, and tells you which bucket it is currently using, if any.

    To add a bucket

    Simply click on the "Add Custom Bucket" link, which will guide you through the same steps as described above for the global bucket. The only difference being that the bucket will be directly associated with the selected space, and nowhere else.

    To remove a bucket

    Simply click on the small red X icon next to the bucket name. This will remove the bucket from the space. Note that the bucket itself remains untouched within AWS.

    To disable Cloudfiles for a Space

    To disable the Cloudfiles section for a space entirely, simply click on the toggle button on the right. In this case the CloudFiles section within pages as well as the space will be hidden, and inaccessible by users.

    If the space has a bucket associated with it, the bucket remains stored with the space, but simply won't be used / accessible.

    If you re-enable the space, the existing bucket will spring back into action.


    If a space used the Global bucket, and you then add a space bucket, none of the existing files are transferred automatically. You will have to take care of that outside of the addon.

    Similarly, if a space was using its own bucket, and you remove the bucket from the space, the space will revert back to the Global Bucket, which will not contain any of the files that were in the space specific bucket.


    Uploading files to a Space

    To upload space related files to your S3 bucket, navigate to the space and select "Cloud Files" from the space navigation bar.

    Once the plugin loads, you'll see a grid with existing files for this space (if any have been uploaded yet).
    To upload more, simply click the upload button, and select one or more files to upload.

    TIP: upload files via drag & drop

    You can also drag & drop files from your desktop directly onto the S3 browser, and they will be uploaded automatically.

    Uploading Files to a Page

    To upload page related files to your S3 bucket, navigate to a page of your choice, and utilize the "Cloud Files" link in the actions dropdown. This will show you the Cloud Files attached to the page.

    Click the upload button, and select one or more files to upload to the page.


    Pages still retain their ability to have files attached to them directly, which are stored on Atlassian servers. This means pages can have 2 separate sets of attachments. Regular Attachments, and Cloud Files Attachments.

    Creating Folders

    Creating a folder can be done from the Cloud Files toolbar. Click on the folder icon, and a new row will appear within the file list. Enter the name of the folder to create, and hit enter, or click on the + sign.

    Once created, the folder will show up in the file list, and will be marked with a folder icon.

    To navigate to a folder, click the name, and the Cloud Files section will show the contents of that folder.

    To move up a folder, click on the "../" folder entry at the top.

    Moving Files & Folders

    The short answer:

    Due to technical limitations, this feature is not yet available, but we're trying to figure out how to best implement it.

    The slightly longer answer

    AWS S3 doesn't actually have folders. In S3, a folder is simply an empty file with a filename that ends in a slash. Files that appear in folders actually have a filename that is equivalent to the whole "folder" structure.
    So, a structure that looks like this:

        ABC/
         └── 195232/
              └── screenshot1.png

    Is actually stored as three objects with these keys:

        ABC/
        ABC/195232/
        ABC/195232/screenshot1.png

    Note that those are the actual filenames within the S3 bucket. So the image has the folder name and the slashes as part of its filename.

    This explanation is important because the S3 API does not have a proper "move" operation. Instead, every "move" is actually a copy + delete: the file is copied, and once the copy is done, the original is deleted.

    Moving a single file is technically pretty straightforward. There is an issue, however: we need to manually clean up (delete) the original after the copy is done. So, if a user tries to move a large file that takes a while, and we start the copy operation and the user then navigates away, we could end up with orphaned or duplicate files.

    When moving a "folder", this issue is compounded, since we would need to traverse the folder structure and find all the files within it, and then do a copy/delete for ALL of them.
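    The copy-then-delete sequence can be sketched against an in-memory stand-in for a bucket (a plain dict of key → bytes; a real implementation would issue S3 CopyObject/DeleteObject calls instead):

```python
def move_key(bucket: dict, src: str, dst: str) -> None:
    """Move a single object: copy first, delete the original only afterwards."""
    bucket[dst] = bucket[src]   # copy
    del bucket[src]             # delete the original

def move_folder(bucket: dict, src_prefix: str, dst_prefix: str) -> None:
    """Moving a "folder" means moving every key that starts with its prefix."""
    for key in [k for k in bucket if k.startswith(src_prefix)]:
        move_key(bucket, key, dst_prefix + key[len(src_prefix):])

bucket = {"ABC/195232/": b"", "ABC/195232/screenshot1.png": b"..."}
move_folder(bucket, "ABC/195232/", "ABC/143232/")
print(sorted(bucket))  # ['ABC/143232/', 'ABC/143232/screenshot1.png']
```

    If the process is interrupted between the copy and the delete, the original key is left behind, which is exactly the orphaned/duplicate-file risk described above.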

    Deleting Files & Folders

    To delete a file, simply click on the trash icon for that file and confirm the deletion.

    Deleting a folder is currently not possible, for the same reasons that moving a folder is not possible, as described above.

    Renaming Files

    To rename a file, click on the pencil icon on the right, and the name of the file will change to a text field, allowing you to change the name. When done, click on the green checkmark, and the file will be renamed.

    NOTE: Renaming a folder is currently not possible, for the same reasons that moving a folder is not possible, as described above.


    Managing Files outside of Confluence

    In case you're not aware of this, Simple Cloud Files does not restrict you from using your S3 bucket in other ways. It simply connects to your bucket, and assumes that files are stored in a specific folder structure. Beyond that though, you can manage the contents of the bucket however you wish, independent of Confluence.

    This effectively means you could connect to your S3 bucket through one of the various GUI tools, or even the command line, and manage files. You can browse the folders for each space and page, upload and download files, or move files around.
    Simple Cloud Files will simply pick up the changes the next time a page or a space is loaded.


    Simple Cloud Files expects a specific folder structure. You can move files between folders at will, but do note that the addon will always look for the space files and page files in the corresponding folders. Thus, if you move the folder for page 1234 to another space for example, Cloud Files would not be aware of it.

    On the other hand, if page 1234 has a subfolder named "Specs", and you move files into it for example, then Simple Cloud Files would pick those up, as the folder for the page itself is where the addon expects it to be.