S3 Uploads in NodeJS

Uploading files to S3 using the AWS-SDK is fairly straightforward. Still, I found there are very few examples that demonstrate it with S3’s upload function. Of course, with S3 uploads we also want to look at how to track progress, as well as how to cancel uploads in flight. So in this article I’m going to show an example of doing all of this with the AWS-SDK’s S3 upload function.

Adding S3 Uploads

Let’s look at our S3 upload setup. We of course need to authenticate with AWS first; then we can upload our object.

import AWS from 'aws-sdk'
import fs from 'fs'

let s3

function authenticate(access, secret, region) {
  return new Promise((resolve, reject) => {
    s3 = new AWS.S3({
      accessKeyId: access,
      region: region,
      secretAccessKey: secret
    })
    // listBuckets is only used here to verify that the credentials work
    s3.listBuckets({}, (err, _object) => err ? reject(err) : resolve(s3))
  })
}

function uploadFile(bucket, fileName, file) {
  let params = {
    Bucket: bucket,                   // destination bucket
    Key: fileName,                    // object key to create
    Body: fs.createReadStream(file)   // stream the local file from its path
  }

  let upload = s3.upload(params)
}

So far we have the bare minimum to upload files. We authenticate using our authenticate function, passing our credentials, and then use a listBuckets call to test them. If the listBuckets call returns an error, we know there was an issue with authentication. This isn’t mandatory, but it’s a good way to fail early when authentication is broken, which is especially useful when the credentials are dynamic.
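
As a quick usage sketch, the call might look like this (reading the credentials from environment variables is just one option, and the variable names here are illustrative):

authenticate(process.env.AWS_ACCESS_KEY_ID, process.env.AWS_SECRET_ACCESS_KEY, 'us-east-1')
  .then(() => console.log('Authenticated with S3'))
  .catch(err => console.error('Authentication failed:', err))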

In our uploadFile function, we set our params: the name of the bucket we want to upload to, the fileName (key) we want to give the object, and lastly the file itself, in the form of a relative or absolute path to the file being uploaded. We use that file path to stream the data to our upload function. With what we have now, we can upload files, but we won’t see progress or errors.

I do want to note that we could use a promise with this, but it makes abstracting cancellation a bit more troublesome. For now, we will handle the upload manually.
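
For reference, here is roughly what a promise-based version could look like (the ManagedUpload object returned by s3.upload exposes a promise() method; the function name here is just illustrative). Note how this form hides the upload object we will later need for aborting:

function uploadFilePromise(bucket, fileName, file) {
  return s3.upload({
    Bucket: bucket,
    Key: fileName,
    Body: fs.createReadStream(file)
  }).promise()  // resolves with the upload response, rejects on failure
}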

Monitoring Upload Progress

Now let’s quickly look at how to monitor our upload. When we start the upload, s3.upload returns a ManagedUpload object that emits this data for us. In our uploadFile function, let’s track the progress and add the standard error and response callbacks.


function uploadFile(bucket, fileName, file) {
  let params = {
    Bucket: bucket,
    Key: fileName,
    Body: fs.createReadStream(file)
  }

  let upload = s3.upload(params)

  // fires after each uploaded chunk with loaded/total byte counts
  upload.on('httpUploadProgress', progress => {
    console.log(JSON.stringify(progress))
  })

  // send starts the upload; the callback fires on completion or error
  upload.send((err, response) => {
    if (err) {
      console.error(err)
      return
    }
    console.log(response)
  })
}

Using the send callback, we get access to the standard error and response arguments. These only fire once the upload has finished or errored. In order to see progress during the upload, we need to use the on listener and listen for ‘httpUploadProgress’. This event fires after each chunk with a progress object containing the loaded and total byte counts.
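
For example, you could compute a running percentage inside the listener (a minimal sketch; note that progress.total can be undefined when the SDK cannot determine the body length up front):

upload.on('httpUploadProgress', progress => {
  if (progress.total) {
    const percent = ((progress.loaded / progress.total) * 100).toFixed(2)
    console.log(`Uploaded ${progress.loaded} of ${progress.total} bytes (${percent}%)`)
  }
})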

Canceling S3 Uploads

Now that we have all of the parts to monitor our upload, let’s look at how to cancel it. The ManagedUpload object offers an abort function for when we want to cancel the upload. If you want this ability on the frontend, you may already see an issue: we don’t want to block the process and loop during the upload to handle the abort within the function itself. For instance, in a declarative framework like React, you may want to start the upload asynchronously and have a separate kill switch. In that case, we can abort from another function.

let upload

function uploadFile(bucket, fileName, file) {
  let params = {
    Bucket: bucket,
    Key: fileName,
    Body: fs.createReadStream(file)
  }

  // keep the ManagedUpload in the outer scope so cancelUpload can reach it
  upload = s3.upload(params)

  upload.on('httpUploadProgress', progress => {
    console.log(JSON.stringify(progress))
  })

  upload.send((err, response) => {
    if (err) {
      console.error(err)
      return
    }
    console.log(response)
  })
}

function cancelUpload() {
  upload.abort()
  upload = null
}

Since we’ve moved the ManagedUpload object into the parent scope, we can cancel our upload asynchronously from anywhere that scope is accessible. This is where a promise becomes problematic: we would be unable to share the ManagedUpload in an outer scope without extra lifting.

Adding Control to React and Redux

While we could slap this directly into an async Redux function, that would get a bit messy. Instead, let’s focus on separating the S3 functions to maintain a reusable API. All we need is a bridge between the start, progress, and end events, and we have everything required to control our uploads. I’ve handled this in Redux by using a custom event emitter between the SDK and Redux. Here’s the S3 API:

import AWS from 'aws-sdk'
import fs from 'fs'
import { EventEmitter } from 'events'

let s3, upload
export const uploadEmitter = new EventEmitter()

export function authenticate(access, secret, region) {
  return new Promise((resolve, reject) => {
    s3 = new AWS.S3({
      accessKeyId: access,
      region: region,
      secretAccessKey: secret
    })
    s3.listBuckets({}, (err, _object) => err ? reject(err) : resolve(s3))
  })
}

export function uploadFile(bucket, fileName, file) {
  let params = {
    Bucket: bucket,
    Key: fileName,
    Body: fs.createReadStream(file)
  }

  upload = s3.upload(params)

  // bridge the SDK events to our own emitter
  upload.on('httpUploadProgress', progress => {
    uploadEmitter.emit('progress', progress)
  })

  upload.send((err, response) => {
    if (err) {
      uploadEmitter.emit('error', err)
      return
    }
    uploadEmitter.emit('end', response)
  })
}

export function cancelUpload() {
  upload.abort()
  upload = null
}

In Redux we use the event emitter’s listeners to trigger our state updates within an async function like so:

import path from 'path'
// adjust the import path to wherever the S3 API module above lives
import { uploadEmitter, uploadFile, cancelUpload } from './s3'

export function uploadObject(bucket, file) {
  return dispatch => {
    dispatch(uploadObjectStart(file))
    uploadEmitter.on('progress', progress => {
      dispatch(uploadObjectProgress(progress))
    }).on('end', out => {
      dispatch(uploadObjectSuccess(file))
    }).on('error', error => {
      if (!error.message.match(/Request aborted by user/)) {
        dispatch(errorCreate(error, 'Put Object: '))
      }
    })
    uploadFile(bucket, path.basename(file), file)
  }
}

export function uploadObjectStart(uploadFile) {
  return { type: 'UPLOAD_OBJECT_START', uploadFile }
}

export function uploadObjectProgress(progress) {
  return { progress, type: 'UPLOAD_OBJECT_PROGRESS' }
}

export function uploadObjectSuccess(uploadFile) {
  return { type: 'UPLOAD_OBJECT_SUCCESS', uploadFile }
}

export function uploadObjectCancel() {
  cancelUpload() // this is bad practice, and can instead be called in a componentDidUpdate from this action
  return { type: 'UPLOAD_OBJECT_CANCEL' }
}

You may also notice the condition in the error handler: it ignores the error thrown by cancellations. Those are intentional, right?

Just for the sake of being complete, here’s what our reducer filters could look like:

const uploadState = {
  uploading: false,
  uploadComplete: false,
  uploadFile: '',
  totalLoaded: 0,
  totalSize: 0
}

export default function uploadReducer(state = uploadState, action) {
  switch (action.type) {
    case 'UPLOAD_OBJECT_START':
      return {
        ...state, totalLoaded: 0, totalSize: 0, uploading: true, uploadComplete: false, uploadFile: action.uploadFile
      }
    case 'UPLOAD_OBJECT_PROGRESS':
      return {
        ...state, totalLoaded: action.progress.loaded, totalSize: action.progress.total
      }
    case 'UPLOAD_OBJECT_CANCEL':
      return {
        ...state, totalLoaded: 0, totalSize: 0, uploading: false, uploadComplete: false, uploadFile: ''
      }
    case 'UPLOAD_OBJECT_SUCCESS':
      return {
        ...state, totalLoaded: 0, totalSize: 0, uploading: false, uploadComplete: true, uploadFile: ''
      }
    default:
      return state
  }
}

With the above state, your app knows an upload is complete when uploadComplete is true, and canceled when both uploading and uploadComplete are false. When you want to calculate a progress percentage, just use the loaded and total values in your progress bar:

percentage: ((totalLoaded / totalSize) * 100).toFixed(2)
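
As a small sketch of how that could be wired into a component (the component, prop names, and the state.upload mount point are all illustrative, assuming react-redux is used to map the reducer state to props):

import React from 'react'
import { connect } from 'react-redux'

function UploadProgress({ totalLoaded, totalSize, uploading }) {
  if (!uploading) return null
  const percentage = totalSize ? ((totalLoaded / totalSize) * 100).toFixed(2) : 0
  return <progress value={percentage} max="100">{percentage}%</progress>
}

// assumes the uploadReducer above is mounted under state.upload
export default connect(state => ({
  totalLoaded: state.upload.totalLoaded,
  totalSize: state.upload.totalSize,
  uploading: state.upload.uploading
}))(UploadProgress)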

And that’s a simple way to setup S3 uploads, and integrate them in Redux as a reusable API.

If you liked this article, or want to see more articles like this, please comment below. Until next time, happy coding!

 
