PREREQUISITES

    In order to use the Simple Cloud Files addon, you will need an Amazon AWS account as well as an Amazon S3 Bucket. For security purposes, we suggest creating a user with access to only the Buckets you wish to use.

    Additionally, you will need to configure your S3 Bucket to allow our addon to communicate with it.

    Follow the steps below to set up and configure your S3 Bucket, and to create IAM credentials.

    Setting up your S3 Bucket

    To configure the Simple Cloud Files plugin, you will need an Amazon S3 Bucket.
    You can use an existing bucket if you already have one, or follow these steps:

    Log into your Amazon AWS Account, and navigate to the S3 Management Console.



    From there, click on "Create Bucket", fill out the dialog, and click the "Next" button.





    Once your bucket is created, you will need to allow the Simple Cloud Files addon to communicate with your bucket. To do so, you need to edit the CORS Configuration of the bucket.

    Select the bucket in the list, and click on the Permissions tab to show the bucket permissions. In the permissions tab, click on the CORS Configuration button, which will bring up the "CORS Configuration editor".



    In the CORS Configuration editor, replace the existing configuration with this one:

    <?xml version="1.0" encoding="UTF-8"?>
    <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
       <CORSRule>
         <AllowedOrigin>https://*.tss.io</AllowedOrigin>
         <AllowedMethod>GET</AllowedMethod>
         <AllowedMethod>PUT</AllowedMethod>
         <AllowedMethod>POST</AllowedMethod>
         <AllowedMethod>HEAD</AllowedMethod>
         <AllowedMethod>DELETE</AllowedMethod>
         <AllowedHeader>*</AllowedHeader>
         <ExposeHeader>ETag</ExposeHeader>
       </CORSRule>
    </CORSConfiguration>


    The contents of the dialog should look like this:


    This accomplishes the following:
    1) it allows our addon (which lives at https://*.tss.io) to communicate with your bucket
        This is the precursor to us being able to do anything with your bucket

    2) it allows our addon to make GET / PUT / POST / HEAD / DELETE requests to your bucket
        This is needed to retrieve files, upload new files, or delete files

    3) it provides our addon with the ETag header in responses
        This is needed for things like multi-part uploads of larger files
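
    If you prefer to set the CORS configuration programmatically rather than through the console, the same rule can be applied with the AWS SDK. The following is a minimal sketch using Python and boto3; the bucket name is hypothetical, and it assumes your AWS credentials are already configured:

    import boto3

    s3 = boto3.client("s3")

    # Apply the CORS rule the addon needs. Note that this overwrites the bucket's
    # existing CORS configuration, so merge manually if you have other rules to keep.
    s3.put_bucket_cors(
        Bucket="my-cloud-files-bucket",  # hypothetical bucket name
        CORSConfiguration={
            "CORSRules": [
                {
                    "AllowedOrigins": ["https://*.tss.io"],
                    "AllowedMethods": ["GET", "PUT", "POST", "HEAD", "DELETE"],
                    "AllowedHeaders": ["*"],
                    "ExposeHeaders": ["ETag"],
                }
            ]
        },
    )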

    WHAT IS 'CORS' AND WHY DO I NEED IT?

    'CORS' stands for Cross-Origin Resource Sharing.
    For security reasons, browsers restrict requests made from a page on one domain to a different domain. In this case, AWS keeps your bucket safe, and by default doesn't allow anyone (including us) to communicate with your bucket from the browser. The above settings tell AWS that it's okay for our addon to communicate with your bucket.

    Setting up S3 Credentials

    In addition to the S3 Bucket, you will need AWS Credentials to connect with it.

    NOTE

    For security purposes, we suggest creating an IAM User that only has permissions to this specific bucket, and nothing else.



    To create such an IAM user, log into your Amazon AWS Account, and navigate to the "Identity and Access Management" section.



    From the "Identity and Access Management" Dashboard, click on the "Users" section in the left navigation, and then on the "Create User" button.





    Fill out one of the user name fields with a desired username, and make sure the checkbox for "Generate an access key for each user" is checked. Click the "Create" button when done.



    This will create the user, and bring you to a screen to see the Security Credentials.



    A Note about Credentials

    Write these credentials down, or download them (blue button in the footer), as this is the only time these credentials are visible.



    Once done, return to the list of users, and click on the user record to get to the user's profile. Expand the "Permissions" section, then expand the "Inline Policies" section, and create a new inline policy.



    When creating the policy, select "Custom Policy", and give it a name. We usually use something like "Bucket [BUCKET NAME] Access", but anything will suffice.
    For the actual policy, paste the following, but replace YOUR_BUCKET_NAME_HERE with the name of your S3 bucket.

    {
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "s3:*",
          "Resource": [
            "arn:aws:s3:::YOUR_BUCKET_NAME_HERE",
            "arn:aws:s3:::YOUR_BUCKET_NAME_HERE/*"
          ]
        }
      ]
    }
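
    If you'd rather script the IAM setup, the same steps can be performed with the AWS SDK. This is a rough sketch using Python and boto3 (the user and policy names are just examples); it assumes you run it with credentials that are allowed to manage IAM:

    import json
    import boto3

    iam = boto3.client("iam")
    bucket = "YOUR_BUCKET_NAME_HERE"      # replace with your bucket name
    user = "simple-cloud-files"           # example user name

    # Create the user and an access key for it.
    iam.create_user(UserName=user)
    key = iam.create_access_key(UserName=user)["AccessKey"]
    print("AccessKeyId:    ", key["AccessKeyId"])
    print("SecretAccessKey:", key["SecretAccessKey"])   # only retrievable now, store it safely

    # Attach the inline policy from above, scoped to the single bucket.
    policy = {
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::" + bucket,
                    "arn:aws:s3:::" + bucket + "/*",
                ],
            }
        ]
    }
    iam.put_user_policy(
        UserName=user,
        PolicyName="bucket-" + bucket + "-access",
        PolicyDocument=json.dumps(policy),
    )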


    The dialog should look something like this:

    Click on "Apply Policy", and you're done setting up S3, and ready to configure the plugin with your new bucket.

    SETUP & CONFIGURATION

    Configuring the Simple Cloud Files Plugin

    To configure the addon to use your S3 Bucket, navigate to the addons section of Confluence, and then "Manage add-ons". Find the Simple Cloud Files plugin, and then click on "Configure".



    Alternatively, you can navigate to a space and click on the "Cloud Files" link in the space navigation on the left, which will bring up the Cloud Files section. From there you can easily get to the bucket configuration.

    Configuring the Global Bucket

    Simple Cloud Files allows for a single (global) bucket to be shared across all Confluence spaces. Once this bucket is configured, all spaces automatically use it.

    Each space automatically receives its own folder within the bucket, based on the space key. Within each space folder, each page receives a folder based on the page id. The resulting folder structure looks somewhat like this:

    root/
    ├── ABC/
    │    ├── spaceFiles/
    │    │    ├── Mockups/
    │    │    └── Timesheets.xls
    │    │
    │    ├── 195232/
    │    │    ├── screenshot1.png
    │    │    └── Requirements.doc
    │    │
    │    └── 143232/
    │         └── API-Spec.pdf
    │
    └── XYZ/
         ├── spaceFiles/
         │    └── Style Guide.pdf
         │	 
         ├── 13433/
         │    └── report.xls
         │	 
         └── 5534/
              └── Designs.psd
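
    In plain S3 terms, that structure is just a key prefix convention: space-level files live under "<SPACE KEY>/spaceFiles/", and page files live under "<SPACE KEY>/<PAGE ID>/". As a rough illustration (with a hypothetical bucket name), listing the files for page 195232 in space ABC with boto3 would look like this:

    import boto3

    s3 = boto3.client("s3")

    # List the Cloud Files belonging to page 195232 in space ABC.
    resp = s3.list_objects_v2(Bucket="my-cloud-files-bucket", Prefix="ABC/195232/")
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])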
    
    To configure the global bucket, click on the "Edit Bucket" button, and fill in the form.

    Basic Settings

    The basic settings consist of the credentials used to connect to the bucket, as well as the name of the bucket itself. These are required in order to set up a bucket.

    Setting             Description
    AccessKey           The AccessKey is provided by Amazon when setting up the IAM credentials.
    Secret AccessKey    The Secret AccessKey is provided by Amazon when setting up the IAM credentials.
    Bucket Name         The name of an existing bucket. The above credentials need access to this bucket.
    Bucket Region       The region the bucket is in. This field is filled in automatically when you test the connection.


    Testing the Bucket Connection

    Once you've entered all the credentials, you can test the connection to the bucket via the "Test Connection" button. This will attempt to connect to the bucket and ensure the credentials are valid and have permissions to the bucket.

    Additionally, the addon automatically checks whether the CORS configuration for the bucket is correct and contains all the bits and pieces the addon needs to function.
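
    Roughly speaking (this is an illustration, not the addon's actual implementation), the test is equivalent to checks like these with boto3, using a hypothetical bucket name:

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client(
        "s3",
        aws_access_key_id="AKIA...",        # the AccessKey from the IAM setup
        aws_secret_access_key="...",        # the Secret AccessKey
    )
    bucket = "my-cloud-files-bucket"        # hypothetical bucket name

    try:
        s3.head_bucket(Bucket=bucket)       # do the credentials reach the bucket?
        region = s3.get_bucket_location(Bucket=bucket)["LocationConstraint"] or "us-east-1"
        rules = s3.get_bucket_cors(Bucket=bucket)["CORSRules"]   # is CORS configured at all?
        print("Bucket reachable in region:", region)
    except ClientError as err:
        print("Connection test failed:", err.response["Error"]["Code"])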

    If successful, you should see a shiny green success message:

    If the bucket doesn't exist, the connection failed, or the CORS configuration is missing anything, you will see a red error message with some details about why the connection failed.

    The error messages are typically pretty straightforward, and are a result of either the credentials being incorrect, or the bucket not being accessible.
    Every now and then, errors can be a bit vague, because AWS doesn't always return the most descriptive errors.

    If that happens, ensure the credentials are correct, the IAM permissions are correct, and that the bucket exists. If that doesn't do the trick, reach out to us, and we will gladly help you get set up.


    Testing the CORS Configuration

    If the error you're seeing when testing the connection is due to the CORS configuration, we provide a handy "Fix CORS" button right in the error message. If you click that, the addon will automatically edit the CORS configuration of the bucket to fill in the missing pieces.

    QUESTION: Won't you clobber my existing CORS config?

    The simple answer is No.

    We specifically look through all entries to find any that have an AllowedOrigin specific to our addon. If we find an entry that matches, we edit that entry. If we can't find a CORS entry that relates to our addon, we simply add a new entry, and leave everything else untouched.

    To avoid any conflicts or other issues, we never edit any other CORS entries.
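
    In spirit, the "Fix CORS" button does something like the following sketch (again boto3, with a hypothetical bucket name, and not the addon's actual code): it keeps every existing rule and only replaces or appends the one rule that belongs to the addon.

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    bucket = "my-cloud-files-bucket"        # hypothetical bucket name

    addon_rule = {
        "AllowedOrigins": ["https://*.tss.io"],
        "AllowedMethods": ["GET", "PUT", "POST", "HEAD", "DELETE"],
        "AllowedHeaders": ["*"],
        "ExposeHeaders": ["ETag"],
    }

    try:
        rules = s3.get_bucket_cors(Bucket=bucket)["CORSRules"]
    except ClientError:
        rules = []                          # the bucket has no CORS configuration yet

    # Keep every rule that isn't ours, then add (or re-add) the addon's rule.
    others = [r for r in rules if "https://*.tss.io" not in r.get("AllowedOrigins", [])]
    s3.put_bucket_cors(Bucket=bucket,
                       CORSConfiguration={"CORSRules": others + [addon_rule]})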

    Advanced Settings

    The advanced settings are options that allow you to further tweak how Simple Cloud Files interacts with your S3 Bucket.

    Setting             Description
    Prefix              By default, we store files in the root of the bucket (based on the structure detailed above). If you wish to store the files somewhere else in the bucket (a subfolder, perhaps), enter that path here as a prefix.
    Timeout             This setting determines how much time we give each upload/download request. The default is 10 minutes, meaning a file upload, for example, has 10 minutes to finish before it is aborted for taking too long. If you have to live with a slow connection, or have large files to upload, we suggest increasing the timeout.
    Full Navigation     Normally, each page only allows navigating the files that belong to that page, and likewise a space only shows its space-level files. Enabling this setting allows navigating through the full bucket from anywhere. Thus, from within a page, you could navigate upwards and see space-level files.
    Multipart Upload    This setting controls whether file uploads are processed as a single request, or split into multiple chunks that are uploaded separately and then assembled back together once the upload finishes (see the sketch below). Multipart upload is required if you want to upload large files. This is turned on by default.
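
    To illustrate what multipart upload means in S3 terms (the addon does this from your browser; the boto3 example below is only a server-side sketch of the same concept, with hypothetical names):

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Files larger than the threshold are split into chunks, uploaded separately,
    # and reassembled by S3 once the last part finishes.
    config = TransferConfig(multipart_threshold=8 * 1024 * 1024,    # 8 MB
                            multipart_chunksize=8 * 1024 * 1024)

    s3.upload_file("Designs.psd", "my-cloud-files-bucket",
                   "XYZ/5534/Designs.psd", Config=config)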

    Permissions

    The Permissions settings allow you to define which user group is allowed to interact with the Cloud Files addon in a particular way.

    By default, everyone with access to a Space or a Page is allowed to perform all actions. This includes uploading files, creating folders, renaming files & folders, moving them, as well as deleting files & folders.

    Once you change the settings to require specific permissions, any user will be able to browse through the Cloud Files section of a page or space, but only users belonging to the specified groups will be able to perform the actions you grant them.



    Permission          Description
    Upload              Users with this permission are allowed to upload files.
    Folder Create       Users with this permission are allowed to create folders.
    Delete              Users with this permission are allowed to delete files & folders.
    Move                Users with this permission are allowed to move files & folders.
    Rename              Users with this permission are allowed to rename files & folders.


    To remove an entry, either click the black X at the end of its row, or revoke all of its permissions (turning them red). The next time you save the settings, any removed rows, and any rows with all permissions denied, are automatically removed.

    Disabling the Global Bucket

    As you may have noticed, it is possible to disable the Global bucket. To do so, simply switch the toggle button.

    If the global bucket is disabled, none of the Confluence spaces will have a bucket assigned to them (obviously). This means each space will either need to supply its own bucket, or disable Cloud Files entirely.

    Once you've filled out the form, click on "Save Settings", and you're ready to use the plugin.

    Configuring a Space Bucket

    If you don't want to have a Global bucket that's shared between all spaces, or simply would like to have a separate bucket for some of your spaces, you can do so by configuring Space Buckets.

    As you can see, each Confluence space is listed, and tells you which bucket it is currently using, if any.

    To add a bucket

    Simply click on the "Add Custom Bucket" link, which will guide you through the same steps as described above for the global bucket. The only difference being that the bucket will be directly associated with the selected space, and nowhere else.


    To remove a bucket

    Simply click on the small red X icon next to the bucket name. This will remove the bucket from the space. Note that the bucket itself remains untouched within AWS.


    To disable Cloud Files for a Space

    To disable the Cloud Files section for a space entirely, simply click on the toggle button on the right. In this case, the Cloud Files section within pages, as well as within the space, will be hidden and inaccessible to users.

    If the space has a bucket associated with it, the bucket remains stored with the space, but simply won't be used / accessible.

    If you re-enable the space, the existing bucket will spring back into action.


    NOTE

    If a space used the Global bucket, and you then add a space bucket, none of the existing files are transferred automatically. You will have to take care of that outside of the addon.

    Similarly, if a space was using its own bucket, and you remove the bucket from the space, the space will revert back to the Global Bucket, which will not contain any of the files that were in the space specific bucket.


    USING SIMPLE CLOUD FILES

    Uploading files to a Space

    To upload space related files to your S3 bucket, navigate to the space and select "Cloud Files" from the space navigation bar.



    Once the plugin loads, you'll see a grid with existing files for this space (if any have been uploaded yet).
    To upload more, simply click the upload button, and select one or more files to upload.



    TIP: upload files via drag & drop

    You can also drag & drop files from your desktop directly onto the S3 browser, and the files will be uploaded automatically. If you drop a file from your desktop directly onto a folder in the S3 browser, it will be uploaded to that folder.

    Uploading Files to a Page

    To upload page related files to your S3 bucket, navigate to a page of your choice, and utilize the "Cloud Files" link in the actions dropdown. This will show you the Cloud Files attached to the page.


    Click the upload button, and select one or more files to upload to the page.



    NOTE

    Pages still retain their ability to have files attached to them directly, which are stored on Atlassian servers. This means pages can have two separate sets of attachments: regular attachments, and Cloud Files attachments.

    Creating Folders

    Creating a folder can be done from the Cloud Files toolbar. Click on the folder icon, and a new row will appear within the file list. Enter the name of the folder to create, and hit enter, or click on the + sign.

    Once created, the folder will show up in the file list, and will be marked with a folder icon.

    To navigate to a folder, click the name, and the Cloud Files section will show the contents of that folder.

    To move up a folder, click on the "../" folder entry at the top.

    Moving Files & Folders

    To move a file or folder, simply click the file (or folder), and drag it onto one of the other folders in the S3 browser.

    Note that moving a single file is near instantaneous, while moving a folder can take a while. While the move operation is in progress, you will see a progress indicator letting you know how many files are being moved in total, and how many are already done.

    The addon tries to move 6 files at a time. In our tests, a folder with 900 files took about 60 - 90 seconds to complete.

    NOTE: There is a 1000 File limit when moving folders

    Due to technical reasons, we only allow moving folders with fewer than 1000 files in them.

    AWS does not provide us with an actual 'move' operation for S3 objects. As such, in order to move a folder, we need to recursively look up all the objects in the folder, and then move one file at a time. A 'move' operation is actually a copy + delete operation.
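
    For reference, the copy + delete pattern looks roughly like this in boto3 (a sketch with hypothetical names; the addon performs the same steps from the browser, six files at a time):

    import boto3

    s3 = boto3.client("s3")
    bucket = "my-cloud-files-bucket"        # hypothetical bucket name

    def move_folder(src_prefix, dst_prefix):
        """Move every object under src_prefix to dst_prefix (copy, then delete)."""
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket, Prefix=src_prefix):
            for obj in page.get("Contents", []):
                old_key = obj["Key"]
                new_key = dst_prefix + old_key[len(src_prefix):]
                s3.copy_object(Bucket=bucket, Key=new_key,
                               CopySource={"Bucket": bucket, "Key": old_key})
                s3.delete_object(Bucket=bucket, Key=old_key)

    move_folder("ABC/195232/Drafts/", "ABC/195232/Archive/")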

    In future iterations, we will be adding a more robust queueing system behind the scenes, which will allow moving larger folders. Until then, you would have to handle these outside of the addon, in the S3 console directly.

    Deleting Files & Folders

    To delete a file or folder, simply click on the trash icon for that file and confirm the deletion; the file (or folder) will then be deleted.

    Deleting a folder can take a little while, as we have to recursively look up each object in the folder, and then issue a delete call for up to 1000 files at a time.
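
    The same pattern, sketched with boto3 (hypothetical names, not the addon's actual code): list the keys under the folder, then delete them in batches of up to 1000 per request.

    import boto3

    s3 = boto3.client("s3")
    bucket = "my-cloud-files-bucket"        # hypothetical bucket name

    def delete_folder(prefix):
        """Delete every object under prefix, up to 1000 keys per DeleteObjects call."""
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
            keys = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
            if keys:
                s3.delete_objects(Bucket=bucket, Delete={"Objects": keys})

    delete_folder("XYZ/13433/")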

    Renaming Files

    To rename a file or folder, click on the pencil icon on the right, and the name of the file will change to a text field, allowing you to change the name. When done, click on the green checkmark, and the file will be renamed.

    NOTE: Since AWS does not have a proper rename (or move) API, renaming is actually two operations behind the scenes: a copy followed by a delete. We copy the file to the new name, and then delete the original.
    Renaming a folder follows the same principle as moving a folder, and could take a while on larger folders. The same 1000 file limit applies here as well.

    Embedding S3 Content in a Page

    We realize that uploading files to pages and spaces isn't necessarily the easiest way for your users to get to those files. As such, we've added several custom content macros that allow you to embed S3 files into pages, or expose a full S3 Browser in a page.

    S3 Image Macro

    To embed an image file from S3 in a page, use the 'S3 Image' macro. The inserted macro will look like this when editing the page:

    The macro edit dialog allows you to select a single image to be embedded in the page. You can select an image associated with the page, or alternatively go up a few levels and select an image from the space itself.

    Since the selected image could be multiple folders deep, the macro editor comes with a convenient tab that allows you to see which image the macro is currently configured for, and easily remove it.

    Additionally, you can set extra options for the embedded image.

    Option              Description
    View                Determines how the image should be shown.
                        "Full Size" to show the image in its native resolution.
                        "Thumbnail" to show the image using the provided size constraints.
    Thumbnail Width     The width of the image when using Thumbnail mode.
                        Use a blank value if you want the width to be proportionally scaled to the height.
    Thumbnail Height    The height of the image when using Thumbnail mode.
                        Use a blank value if you want the height to be proportionally scaled to the width.
    Border              Determines whether the image should have a border around it.
    Vertical Space      The amount of space (in pixels) above and below the image.
    Horizontal Space    The amount of space (in pixels) on the left and right of the image.

    NOTE about load times and image sizes

    Even though the options allow you to set the view as "Thumbnail", this simply takes the full size image and shows it scaled down. The Simple Cloud Files addon does not generate actual thumbnails. As such, if you embed a 10MB image into a page and try to show it as a tiny thumbnail, the addon will still load a 10MB image.

    S3 Link Macro

    To link to an S3 file in a page, use the 'S3 Link' macro. The inserted macro will look like this when editing the page:

    The macro edit dialog allows you to select one or multiple files, either from the files associated with the page, or, if you go up a few levels, from the space itself.

    Since the selected files could come from multiple different folders, the macro editor comes with a convenient tab that allows you to see which files the macro is currently configured for, and easily remove them.

    By default, file links are rendered in-line, separated by a space. The additional macro options allow you to change that.

    Option              Description
    Render mode         The default renders links in-line, separated by a space.
                        The bulleted list mode renders each file link as a separate bullet.
    Detailed            Enabling this option adds extra chrome around each link, rendering them similar to how user mentions are rendered, with a box around them.

    S3 Browser Macro


    The S3 Browser macro behaves a little differently from the other macros. It allows you to embed a full S3 browser in the page.

    This can be used as a convenient way to have a page about a particular topic that shares files with users right in the context of your page content.

    The path you select becomes the root of the macro (users won't be able to go up above this path).

    Any user with access to the page will be able to see the files in the path you've selected.

    By default, users can traverse folders, see the files, and download them. Using the macro options, you can allow users to perform additional actions like upload or delete files.

    Option                Description
    Allow Upload          Whether users are allowed to upload files.
    Allow Delete          Whether users are allowed to delete files.
    Allow Folder Create   Whether users are allowed to create folders.
    Allow Rename          Whether users are allowed to rename files.


    MISC

    Managing Files outside of Confluence

    In case you're not aware of this, Simple Cloud Files does not restrict you from using your S3 bucket in other ways. It simply connects to your bucket, and assumes that files are stored in a specific folder structure. Beyond that though, you can manage the contents of the bucket however you wish, independent of Confluence.

    This effectively means you could connect to your S3 bucket through one of the various GUI tools, or even the command line, and manage files. You can browse the folders for each space and page, upload and download files, or move files around.
    Simple Cloud Files will simply pick up the changes the next time a page or a space is loaded.
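
    For example, assuming the default layout (no prefix configured) and a hypothetical bucket name, dropping a file into a page's folder with boto3 is all it takes for it to appear in Cloud Files:

    import boto3

    s3 = boto3.client("s3")
    bucket = "my-cloud-files-bucket"        # hypothetical bucket name

    # Upload straight into the folder for page 195232 in space ABC;
    # it shows up in Cloud Files the next time the page is loaded.
    s3.upload_file("meeting-notes.pdf", bucket, "ABC/195232/meeting-notes.pdf")

    # Space-level files go into the space's spaceFiles/ folder instead.
    s3.upload_file("Timesheets.xls", bucket, "ABC/spaceFiles/Timesheets.xls")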

    NOTE

    Simple Cloud Files expects a specific folder structure. You can move files between folders at will, but do note that the addon will always look for the space files and page files in the corresponding folders. Thus, if you move the folder for page 1234 to another space for example, Cloud Files would not be aware of it.

    On the other hand, if page 1234 has a subfolder named "Specs", and you move files into it for example, then Simple Cloud Files would pick those up, as the folder for the page itself is where the addon expects it to be.