Writing Complete Apps in Electron


For a few years now, Electron has been my go-to framework for developing desktop applications, and in that time I have run into a few unexpected nuances. Others who hit these same issues often abandon Electron abruptly because of them. So we will discuss everything you need to write your apps in Electron, along with some resources and tricks to go with it.

Electron in a Nutshell

Electron is a framework built on the Chrome V8 engine, so your Electron apps run on a stripped-down Chrome browser of sorts. You can build your app around the engine, both at the lower level with C++ and in the pre-built processes using JavaScript. Chances are that you have used a handful of Electron applications without even knowing it: Visual Studio Code, Basecamp, and Discord all use Electron as their desktop framework. With the exception of VS Code, each one uses Electron to access the app remotely, whereas VS Code hosts all code locally. With local apps you have the ability to use NodeJS throughout the entire application, an option unavailable to a regular web application. Where you host your app is a good thing to consider, but we’ll discuss that later.

So far, Electron should sound fairly cut and dried, but the issues start to arise in the overall structure. To better explain: the base process is the Electron main process, which creates each window in an Electron renderer process. These two processes are very different, with differing functionality and differing performance limitations. Until we discuss it later, the most important concept to remember is that Electron has two separate processes for handling the application.

From Development to Production

When first building an Electron app for production, you may find a lot of different parts of the app breaking. With that in mind, it is important to first understand that all build files are packed within an asar archive file. Within this file is an environment similar to your development build environment.

Handling Case Sensitive Files

When working with multiple platforms, you are inevitably going to run into issues with text casing. Before you even start the project, it’s best to decide how you plan to handle file and folder naming. This may not seem very important, but once you make your choice, it will not be easy to change in larger projects. I bring this up because changing the casing of files later turns into a drawn-out process.

Since Linux is the only case-sensitive platform (by default), you will need to reference every file by its exact case-sensitive name in your file imports. You also need to keep in mind that Git is case-sensitive. For instance, when you change the casing of a file or folder name, you will end up with a commit record for two completely different files, and any time you change the file, you will end up with changes for two separate files staged with each change.

To get around the duplicate file changes, you have to change the name back to the original casing, remove the file by moving it to a directory outside the repository, commit the file removal, then re-create the file with the new casing and commit that file addition. All of that just to change the casing of a file. On top of that, you lose the ability to directly access the history of that file, because you removed it, and will have to traverse your Git history to see the previous file changes. So make sure you choose your naming scheme before anything else.
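As a concrete sketch of the process above (file names hypothetical), the dance looks something like this in a throwaway repository:

```shell
# Demo in a throwaway repo: renaming Foo.js to foo.js without Git
# ending up tracking two files on a case-insensitive filesystem.
set -e
repo="$(mktemp -d)"
bak="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo 'module.exports = {}' > Foo.js
git add Foo.js && git commit -q -m 'Add Foo.js'

# 1. move the file outside the repository and commit the removal
mv Foo.js "$bak/Foo.js"
git rm -q Foo.js
git commit -q -m 'Remove Foo.js'

# 2. re-create the file with the new casing and commit the addition
mv "$bak/Foo.js" foo.js
git add foo.js
git commit -q -m 'Add foo.js with new casing'
```

After the second commit, Git tracks only the new casing, at the cost of the old file's direct history.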

Building for Production

Before discussing production-specific cases, it should be understood that testing Electron features on production builds requires much more than simply setting the NodeJS production environment variable within the app’s development environment. While the production environment is fine for testing your production bundle, it will cause a lot of different Electron features to break, and you could drive yourself crazy trying to make them work; most of the breakage is related to external file handling, as Electron expects to be run from the asar file.

File Handling in Production

When I was first introduced to Electron, one concept that threw me off was the Electron resource files. The resources folder provides non-web files like images, docs, and any other file that cannot be packed directly into the Electron asar file. For this folder, Electron provides a resourcesPath property on the process object, with the only catch being that this path can only be accessed in production. This means your app needs to change the resource path dynamically based on the build environment, assuming you plan to use it.

So when getting ready to use an icon, you may find that the Electron docs show setting the icon using an absolute path, but realistically you will find this to be a problematic practice. If you remember, all external files are in our resources path, and that includes our app icons. We can dynamically set an absolute path to access these icons, but it will not take long to find that this method takes a lot more effort on Linux, as the mounted path can differ based on the distribution, and can even use a hashed mount path.

Let’s be honest, absolute file path references are a bad idea anyway, so instead of trying to make this work, the problem can easily be remedied with relative paths. Throughout all of my own Electron applications I use the following conditional path setter for all my resources:

conditional resource path

We can use this path to access the app icons, external images, and anything else we need alongside the asar file. If you want to better understand the layout, you can always view the scaffolding by unpacking a production asar and observing the files.

$ asar extract dist/<platform>/<app name>/app.asar unpacked/

Electron-Specific Files in the Asar

We should be using relative paths anyway, so let’s segue into the issues of directly altering files within the asar. Maybe you have data specific to the app that you don’t want to bring to the user’s attention, or you simply want to keep tabs on the files being handled; whatever the reason, there are times when this functionality can be beneficial. The issue you will find is that Linux doesn’t support writing to files within the asar while the app is running; you can read, but you cannot write.

One way around this particular problem is to use the local user temp directory (accessible via os.tmpdir()). Unfortunately, this solution introduces a new issue, as the placement of the app file on OSX and in Windows portable builds can cause this path to fail. Making sure the app works everywhere is certainly ideal, so if you can avoid the need to alter files, it’s best to do so; otherwise the temp directory is your best solution.

Icon Handling

Every app should have an icon, and let’s be honest, replacing the default Electron app icon with our own is the badge of honor showing it is an app separate from Electron itself.

To do this, we want to drop all icon images into our resources folder, referencing the OSX/Linux PNG icon when initializing the BrowserWindow object, while the Windows ICO icon can simply be placed in the resources folder. Tray icons will need to be placed with the resource files as well, but you may prefer placing them inside an images folder within the resources folder, and then referencing that path when initializing your Tray object.

Icon Styling

Using a single icon for every platform is a pretty bad idea. The icon will look fine on one platform but horrible on another. Usually, for a platform like OSX or Linux, you will need to follow a style guide to make the icon look like it belongs there. I found the best practice was to follow Apple’s styling guidelines for my OSX and Linux app icons. Follow the guidelines for your base icons, and then use electron-icon-maker to create all the sizes you need. With Windows, on the other hand, you only need a square icon with little padding for it to look decent (it’s not very picky).

Frontend Styling

No matter how you look at it, Chrome is different on every platform. So when you create a multi-platform application with Electron, you expect the application to look the same across each one. Unfortunately this doesn’t happen, and you will find that Windows and Linux display the frontend differently, despite using the exact same CSS in each application. For the most part it comes down to each OS having platform-specific UI elements, which are never going to be the same, while also using different iterations of fonts, which can throw the sizing off. If you use the frameless option for OSX, this can also cause some differences.

This can all be annoying, but the reality is that you can work around it with CSS. In my particular scenario, where I was using frameless for OSX, I found that hiding the scrollbars would fix the UI differences in Windows/Linux, and then dynamically altering the app height when initializing the BrowserWindow object was all that was needed to match the height across all platforms. Otherwise it was all about making sure CSS styling was reset to be as uniform as possible, though at first glance you can always see the font differences. As long as you use static sizes with pixel or REM counts, you will be fine.

Local and Remote Apps in Electron

How you host your application determines the support you can offer within it. The reason for this comes down to security concerns that could potentially put your users in danger. Here we will go over the reasons why this matters so much, and why the way you structure your application makes such a big difference.

NodeJS Support

When you plan to use Electron as a locally hosted application, you have the ability to add NodeJS compatibility to your application, which gives you a wider set of options when writing it. While there are some structural concerns that come with this compatibility (see Selective Automation), I would highly recommend this option if you plan to write a fully-fledged application, especially if it requires functionality from the NodeJS API, or irreplaceable NodeJS modules.

To enable node integration within Electron, you will need to add the following to each BrowserWindow object you initialize within Electron:

  new BrowserWindow(Object.assign({
    webPreferences: {
      devTools: process.env.NODE_ENV === 'development',
      nodeIntegration: true
    }
  }, browserWindowOpts))

Hosting Remote Applications

One of the great advantages of using Electron to serve a remote application is that you can break free of the restriction of using JavaScript. For instance, Basecamp uses Electron, but being remote, the app itself is handled by Ruby on Rails. Because of this, using Electron to host a remote SPA (Single Page Application) is a common practice.

Unfortunately, when utilizing Electron this way, there are several security concerns to be aware of, the most important being protection from remote code execution. With that, it is extremely important that you become familiar with the Electron Security documentation.

Note: When developing an application with the Webpack Development Server, you will find security warnings from Electron in the console, but if the end product loads a local file in production, you can ignore these warnings.

To avoid issues with remote applications, the first important step is to use the sandbox feature, so code execution is isolated within the application. Second, the remote host should use SSL encryption for the transport layer. Lastly, avoid any sort of NodeJS integration with remote applications; leaving it enabled gives a malicious attacker the ability to interact directly with the local system, a capability you certainly do not want to offer.

If you cannot avoid access to the NodeJS API, use the preload script option to inject your own local API exposing the modules you need, without exposing the entire NodeJS API. If you choose to use preload, it’s worth checking out the contextIsolation option, so that the Electron API is separate from the window and document scope, putting it out of reach as well.
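Put together, the hardening steps above might look like the following when building window options (a sketch; the flag names are real Electron webPreferences options, but the helper itself is hypothetical):

```javascript
// Build the BrowserWindow options for a window that loads remote content.
function remoteWindowPreferences(preloadPath) {
  return {
    webPreferences: {
      sandbox: true,             // isolate renderer code execution
      nodeIntegration: false,    // never hand Node.js to remote content
      contextIsolation: true,    // keep Electron APIs off window/document
      preload: preloadPath       // expose only your own minimal API
    }
  }
}
```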

Prepping Your Bundles for Production

Once you get into writing your Electron app, you need to understand that you are effectively writing at least two applications: first, the application you want the user to interact with, and second, the Electron main process script, which interacts directly with Electron to present your application to the user.

JavaScript Support

Before you start: the Electron main process script does not fully support ES6/ESNext module syntax, so you will need to write it with ES5-style imports/exports (require and module.exports), and it does not support hot-reloading.

Some developers set up an Electron-focused Webpack config with HMR to get around this, but realistically it is overkill. Once you write the Electron portion of your application, you will find you rarely change it after that point. If reloading is bothersome, you can use Nodemon. To be completely honest, I can count on one hand the number of times I’ve changed my main process script over a year; it simply doesn’t change enough to justify a fully-fledged Webpack development server setup, all for some syntactic sugar.

On the other hand, your frontend application will always justify a Webpack setup, and all JS/Electron boilerplates will have the environment set up this way. This code runs in the Electron renderer process, so you will want to make sure it’s not too top-heavy, as the renderer process simply isn’t made for intensive processing.

When bundling for production, you want a separate Webpack configuration for production, which strips out all of the comments, bundles the required JS modules, and transpiles it all down to ES5. UglifyJS is still the de facto standard when searching for this functionality, but the reality is, if you are transpiling ES6/ESNext, you want the Terser Webpack Plugin to accomplish this.
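A sketch of that production minification setup, assuming terser-webpack-plugin is installed (file name and options illustrative):

```javascript
// webpack.prod.js (sketch)
const TerserPlugin = require('terser-webpack-plugin')

module.exports = {
  mode: 'production',
  optimization: {
    minimize: true,
    // Terser handles ES6+ output that UglifyJS cannot parse
    minimizer: [new TerserPlugin({ extractComments: false })]
  }
}
```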

Obfuscating the Open-Source Transpiled Code

It has never been uncommon for me to see creators wanting to make their Electron app as private as possible. While licensing may mean you cannot make your app closed-source, you may not want to give the world easy access to your IP. In that case, you can use obfuscation to make traversing the code a living nightmare, making reverse engineering the only real possibility for anyone wanting to reuse your code.

To make this happen, use the Webpack Obfuscator plugin, but only in your production configuration; using this plugin in development will make debugging errors worthless. To get an idea of what happens to your code when obfuscated, open your bundle after a production build. All variable names are renamed to a generic naming scheme, parameters are changed to hex-style names, and where possible, values are converted as well.
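Wiring it into the production config only might look like this (assuming webpack-obfuscator is installed; the option shown is illustrative):

```javascript
// webpack.prod.js (sketch) -- never add this to the development config
const WebpackObfuscator = require('webpack-obfuscator')

module.exports = {
  mode: 'production',
  plugins: [
    // shuffle the string array to make static reading harder
    new WebpackObfuscator({ rotateStringArray: true })
  ]
}
```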

The only data that is really retained is hard-coded strings. This means if you use redux in your application, all of the reducer action names are retained. If you use action name constants to handle your action name strings, you will see every single one of those constant values hoisted to the top of the bundle. If you want to be less transparent about your actions, it’s best to get rid of the constants, or use hashed strings, so the names make sense in development but appear as arbitrary hoisted values:

const TODO_ACTION_EDIT = '0x0002'

Handling Local Storage

If you remember, we discussed earlier that writing directly to files within the asar is out of the question on Linux, so if you want to retain the state of your application, you will need to resort to local storage. This particular solution works fine, but like anything else, you need to be aware of how to handle it.

Local storage is nothing more than basic site storage, so any data you save goes right into the user’s user-data directory, in plain text. This means if your application uses any sort of sensitive data, you will need to encrypt it before it is saved to local storage. I personally like to use crypto-js to encrypt my state data, using a hardcoded private key to encrypt and decrypt the data.

You may be asking yourself, “Wait, did you say a hardcoded key? Won’t that show in the bundle? Isn’t that a major security flaw?” Of course, and that’s why I brought this up: since string data is retained through obfuscation, you want to break the cipher key into multiple constants, and assemble them before the encryption/decryption process:

import Crypto from 'crypto-js'

const secret = 'private'
const secret2 = 'cipher'

// secret and secret2 make up the full private key
export default class Crypt {
  static encrypt = data => {
    return Crypto.AES.encrypt(data, secret + secret2).toString()
  }

  static decrypt = data => {
    return data === undefined
      ? ''
      : Crypto.AES.decrypt(data.toString(), secret + secret2).toString(Crypto.enc.Utf8)
  }
}

With hundreds of thousands of variables in your application, and even more when adding modules, it would be very difficult for anyone to extract and reassemble the two parts of the cipher key, making this a reasonable practice when the scenario is unavoidable; otherwise you can use other approaches, but with less persistence.

If you want to take the hardcoded cipher encryption to a super-paranoid, tinfoil-hat level of precaution, you can split the key into even more parts and assign the different values to different sections of your application, causing the values to end up in different parts of the bundle when they are inevitably hoisted. You may want to take advantage of the ES6 temporal dead zone by declaring the values with let or const, which avoids var-style hoisting and leaves the bindings uninitialized until the declaration is evaluated. You can also place your let/const values within their own separate closures, embedding them even deeper into the bundle, or place everything in a class object, setting the values as default props or directly as member props, with generally the same result.
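A toy illustration of the splitting idea (fragment values hypothetical): each piece lives in its own closure, so the strings land in different spots in the bundle, and only the assembly function ever sees the whole key:

```javascript
// Each fragment would sit in its own closure/module in a real app,
// so the obfuscated bundle never contains the full key as one string.
const part1 = (() => 'priv')()
const part2 = (() => 'ate-ci')()
const part3 = (() => 'pher')()

const assembleKey = () => part1 + part2 + part3
```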

Working with Local Storage

One downfall of working with local storage is that the same storage is used across all environments. If you have worked with multi-environment JS apps, you know you want to test each environment independently. Unfortunately, local storage is shared, so the state you save in development will show up in production. To get around this issue, I set my localStorage middleware to save state based on the environment:

const Environment = process.env.NODE_ENV === 'production' ? 'production' : 'development'

export const loadState = () => {
  let config = JSON.parse(localStorage.getItem(`${Environment}-config`))
  let status = JSON.parse(localStorage.getItem(`${Environment}-status`))

  let persistedState = {}

  if (config !== null) persistedState['config'] = config
  if (status !== null) persistedState['status'] = status

  return persistedState
}

export const saveConfig = state => {
  const configState = JSON.stringify(state)
  localStorage.setItem(`${Environment}-config`, configState)
}

I found that a lot of developers tend to save all state, all the time, so when the app restarts, it starts exactly where it left off. We are no longer working with pure web apps, so we have to remember that users will have the ability to close and refresh the app (assuming the menu option is enabled), and that their network can interfere with how the app accesses external services. So if any API calls to an external service are interrupted, and that state is saved and reused, it can make the app feel very unnatural. For instance, say an API call is in the middle of a response when the app closes; when it starts back up, it continues to wait on a request that is no longer active.

To get around this issue, I found that a save-state middleware, saving state only when certain redux success actions were called, made for a great method of persisting state around API calls:

import { saveConfig, saveStatus } from './localStorage'

const middleware = () => store => next => action => {
  const result = next(action)
  let currentState = store.getState()

  // always save this state
  saveStatus(currentState.status)

  switch(action.type) {
    // only save config when saved, or it could
    //   persist even when the user clears it
    case 'SAVE_CONFIG_SUCCESS': // hypothetical success action name
      saveConfig(currentState.config)
      break
  }

  return result
}

export default middleware
The whole idea is to make it act like a real desktop application, and when we selectively handle state, we get closer to emulating this. If we ignore this detail, the app just acts like another web app.

Building Apps in Electron

Depending on your application’s needs, you may find you prefer the feature-rich electron-builder to build your Electron projects. For some projects it may be overkill, but when writing locally hosted Electron apps, you will want a builder that lets you specify the build and resource files, sign and notarize binaries, and upload published builds to a separate GitHub project. Overall, it is a good option to consider, especially when you plan to build for multiple platforms; one of my favorite features is having a single configuration in my package.json, which I can use to build all binaries on all three major platforms.

package.json and the Electron Builder Build Settings

  "scripts": {
    "dist": "electron-builder",
    "pub": "electron-builder -p always",
    "build-icons": "electron-icon-maker -i ~/icon_location/icon.png -o assets/app-icon/",
    "postinstall": "electron-builder install-app-deps"
  },
  "dependencies": { ... },
  "build": {
    "appId": "com.electron.appName",
    "productName": "appName",
    "afterSign": "./scripts/notarize.js",
    "afterPack": "./scripts/afterPack.js",
    "publish": [
      {
        "provider": "github",
        "repo": "some-repo",
        "owner": "GHuser",
        "private": true,
        "releaseType": "release",
        "publishAutoUpdate": true
      }
    ],
    "copyright": "Copyright Author",
    "files": [ ... ],
    "directories": {
      "buildResources": "assets/app-icon"
    },
    "extraResources": [ ... ],
    "mac": {
      "category": "public.app-category",
      "darkModeSupport": true,
      "entitlements": "build/entitlements.mac.plist",
      "gatekeeperAssess": false,
      "hardenedRuntime": true,
      "target": "dmg"
    },
    "dmg": {
      "artifactName": "${productName}-${version}_OSX.${ext}",
      "contents": [
        {
          "x": 110,
          "y": 220
        },
        {
          "x": 420,
          "y": 220,
          "type": "link",
          "path": "/Applications"
        }
      ],
      "sign": true
    },
    "linux": {
      "category": "menu-category",
      "target": [ ... ],
      "artifactName": "${productName}-${version}_Linux.${ext}",
      "desktop": {
        "Name": "appName",
        "Terminal": false
      }
    },
    "win": {
      "icon": "assets/app-icon/win/icon.ico",
      "target": [
        {
          "target": "nsis",
          "arch": [ ... ]
        }
      ]
    },
    "nsis": {
      "artifactName": "${productName}-${version}_Win-Setup.${ext}"
    },
    "portable": {
      "artifactName": "${productName}-${version}_Win-Portable.${ext}"
    },
    "buildDependenciesFromSource": true
  }

The above shows a great amount of the functionality available through electron-builder. In it you will find the publish section, which automatically publishes releases to a GitHub repository; a configuration for each platform; options for files, build files, and extra resources; the option to build native dependencies; the ability to name each build artifact; and script hooks to run processes around the build itself. I won’t go over every option here, but what’s shown will give you an idea of what’s available, and you can easily cross-reference new options in the electron-builder configuration docs. The most important settings to be aware of are the options that include files. If you want to include any external files with your application, extraResources is where to do it. The other file options are specifically for the build itself, as you can see from the files and buildResources settings. Just remember which is which, and reference them accordingly within your app.

Anyone familiar with these settings may notice there is an artifact name for both the Windows nsis and portable builds, but no target setting for the portable build. Unfortunately, there is no option to build both nsis and portable at the same time, so after one build is done, the Windows target needs to be altered before building the other.

Using the Tray with Caution

Once you see the option to use the Electron Tray, you may jump at the opportunity, but with it comes a common issue: your app may not close when it appears to. If you assume your users will see the tray icon and shut the client down from there, it will not be long before you change your mind. One of the biggest issues that follows is that another Electron client can be started on top of the existing instance, and if you are loading state from local storage, you will find it fails to load in subsequent clients.

The biggest issue occurs when the main window is closed but the client is left open in the tray. Depending on the platform and how the app was closed, this can happen silently. To avoid this issue, we can listen for the close event, then destroy the window and close the app whenever the event is triggered.

main.js (Electron)

  const windowList = []
  let win = new BrowserWindow(windowOptions)
  windowList.push(win)

  win.on('close', () => {
    windowList.splice(windowList.indexOf(win), 1)
    // destroy the window and quit, so no instance lingers in the tray
    win.destroy()
    if (windowList.length === 0) app.quit()
  })


Code-Signing With Your App

If you have ever downloaded an app and received an error about the application’s origin when opening it, you have run into an application that overlooked code signing. As a matter of fact, on OSX, when opening an unsigned application, users are given one of two options: Cancel, or Move to Trash. Opening the application requires the user to open System Preferences and explicitly allow the app via Gatekeeper. So annoying! If it’s not already obvious, code signing is a practice you want to exercise with every one of your applications, not only to show your users you value their security, but also to keep from annoying them. Luckily, the process is extremely simple with electron-builder, so there is little excuse for overlooking it.

With OSX, code-signing capability requires an Apple Developer Account, which costs a mandatory $99/year. Of course, the more apps you have, the more that cost is justified. You will also need the developer account to notarize your application after signing, that is, if you ever want to get rid of the malware warning in OSX Catalina and later.

Portable Builds

You may not appreciate the power of a portable build until the moment you encounter a user who wants to use your app at work, where installations are forbidden. electron-builder offers everything you need for portable builds. If you use DMG for OSX and AppImage for Linux, you are already set no matter what route you take, and Windows has a dedicated portable target.

With building taken care of, you need to make sure your users can run the app anywhere on a given system, so if your app needs to access files that are not included within the application, you will need to work around that need. This includes accessing the user data folder, as in some cases on OSX and Linux, the system cannot tell your application who the local user is, or where those directories exist. In cases like that, you want to use public system resources like the temp directory (os.tmpdir()) for your file stores.

Testing and Debugging Electron

No matter what you create in Electron, you are going to want to test your application one way or another, both functionally and automatically. Let’s start with the necessities, and add some extensions to go with the V8 engine.

The devtron extension is a must-have for any Electron developer, and can be installed via yarn. Next, if you need a Chrome extension to debug within your app, like the React Dev Tools, you can load it from your local Chrome installation. We do this within the Electron main.js starter file: we start by building a version-agnostic path to the Chrome extension, so we never have to update the path when the extension is updated. After that, we enable Devtron:

const path = require('path')
const fs = require('fs')
const os = require('os')
const { BrowserWindow } = require('electron')

if(process.env.NODE_ENV && process.env.NODE_ENV === 'development') {
  let devtoolsExt

  // React Dev Tools extension hash; swap in the extension you need
  let extensionHash = 'fmkadmapgofadopljbjfkapdkoienihi/'

  switch(process.platform) {
    case 'darwin':
      devtoolsExt = path.join(os.homedir(), 'Library', 'Application Support', 'Google', 'Chrome', 'Default', 'Extensions', extensionHash)
      break
    case 'win32':
      devtoolsExt = path.join(os.homedir(), 'AppData', 'Local', 'Google', 'Chrome', 'User Data', 'Default', 'Extensions', extensionHash)
      break
    case 'linux':
      devtoolsExt = path.join(os.homedir(), '.config', 'google-chrome', 'default', 'extensions', extensionHash)
      break
  }

  // the only folder inside the extension path is its current version
  fs.readdirSync(devtoolsExt).forEach(version => {
    BrowserWindow.addDevToolsExtension(path.join(devtoolsExt, version))
  })

  require('devtron').install()
}

The extension hash shown is for the React Dev Tools, so change it to the extension you desire; the trailing slash is required for the following readdir method call.

If you are the only developer on the project, and only use a single platform for development, you can strip out the extra platforms and use an absolute path for devtoolsExt; there is no need to be frivolous with our resources, especially in development. If you use any devtool that caches debug information, you will want to be as performant as possible. As an example, with React/redux and the Redux Dev Tools, the devtools cache every action for the rollback feature, so slowdowns can happen quickly during development, giving you an idea of how sparse the renderer resources are.

Debugging with Devtron

We have Devtron installed and set up now, but you may ask, “why do we need this?” When working with Electron, all debug functionality within the Electron process comes from Devtron. So when you want to keep an eye on the calls between your Electron processes, or want to watch Electron event listeners, Devtron gives you that power with a viewer for event listeners and the ability to record IPC calls. This means any time your processes communicate, you can debug that traffic and make sure it’s doing what it should. Most importantly, it also gives you direct event handlers for crashes, hangs, and exceptions on the Electron process itself. It also provides a require graph, but adding webpack-bundle-analyzer to Webpack would be much more productive.

Automated Testing

If you are a programmer who built their practices around agile development with continuous integration, or even someone who was introduced to testing way back in the day with the book Extreme Programming, then there is no way you would let an app exist without tests wrapped around it. I could go on all day about how your app is a recipe for disaster without them, but for now I will refrain, and instead discuss what Electron does to make this process difficult.

I would like to say there is a good reason to test the main process, but to be completely honest, for most apps it’s really not worth it. The majority of the code within the Electron script comes from Electron itself, is rarely touched once written, and is mostly static, so there is little to test. It’s only when you add IPC listeners that call functions with nothing to do with Electron that you want tests, and even then, a unit test or two for those individual functions will do.

What you really need to be aware of is handling the tests around the biggest part of the application: the app within the renderer process(es). We can test these like we would any other application, written in whatever framework, subset, or language you decide to use that inevitably compiles to JS. You may remember earlier when I mentioned there was an issue with testing when the NodeJS integration is used. The problem arises when we decide to test a component that has a NodeJS module within it. You have to remember, Electron is its own entity in this regard, so if you want to run end-to-end tests (a.k.a. e2e, integration) on that component, you will find it fails due to the NodeJS module.

To get around this, you can use Spectron, a ChromeDriver/WebDriver alternative for Electron that allows you to run e2e tests. But hold on, can you imagine running hundreds, maybe thousands of integration tests? You know how long that is going to take, and it’s going to drive you crazy; it simply isn’t feasible. What if I told you I have never used an Electron-based test runner for my tests? Let’s discuss how, and why.

Selective Automation

The biggest issue with testing in Electron has always been working around the NodeJS modules, so instead of going the easy route of simply throwing more at Electron, we can take a more strategic and structurally sound route. To do this, we need to be more explicit about where our NodeJS modules are imported and reside. Since the biggest problem is the restrictions that come with the modules, we can restrict NodeJS module imports to the lowest-level components, which deal only with logic. When it comes to testing those components, I can mock the modules, allowing tests to avoid both the modules and the errors they entail. As for the visual components, if you find one needs to make use of a NodeJS module, the method needed is passed via props from the root component using a closure, allowing the visual component to call it while keeping it from ever being directly associated with that module.

So why the root component specifically? Surely there are other ways to go about it, right? Using the root not only allows you to run unit tests on the visual components, but keeps them free to run e2e tests with whatever test runner you prefer. So if you like the speed and ease of Cypress, you are free to use it. We’re not done yet: if you are like me, and like to use UI component libraries like Storybook or Styleguidist, you can use every single one of your visual components to do exactly that! The point being, we put the modules somewhere they cannot restrict us from doing anything.

Again, we could use Electron-based test runners, but without precautions around module placement, we end up making them mandatory, slowing down our tests significantly. If you are using CI, you could add several minutes to each job; not good. On another note, the Cypress team has been working on an Electron-based implementation of Cypress, which would bring a more comfortable, speedy environment to Electron e2e testing. One big selling point is the ability to use it for one long e2e spec that tests the entire app all at once, and quickly! If you are interested in this project, check out the post on their blog, and show your support on the GitHub issue for the Cypress Electron branch.

Making the Most of Your Apps in Electron

Updating Electron

Allowing your app to offer automatic updates is a feature that brings great convenience to your users, but it also requires careful attention. Using the publish option from Electron Builder and enabling the publishAutoUpdate option is half of what you need. The rest is setting up the Electron Builder auto-update events. From there, it is important to note that you will need to test updates with a production build. While you can test the events to an extent in development, doing so properly requires the auto-update XML files in the app build itself, something you cannot achieve outside a production build.
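A minimal sketch of wiring those events, assuming the `electron-updater` package that ships alongside Electron Builder (the `wireAutoUpdater` helper is our own name; the event names are the documented electron-updater ones):

```javascript
// Hypothetical helper that registers the documented electron-updater events.
// Taking the updater object as a parameter keeps this wiring unit-testable.
function wireAutoUpdater(autoUpdater, log = (m) => console.log(m)) {
  autoUpdater.on('checking-for-update', () => log('Checking for update...'));
  autoUpdater.on('update-available', () => log('Update available, downloading...'));
  autoUpdater.on('update-not-available', () => log('App is up to date.'));
  autoUpdater.on('error', (err) => log(`Auto-update error: ${err}`));
  autoUpdater.on('update-downloaded', () => {
    // Installs on the next restart; call autoUpdater.quitAndInstall() to apply now.
    log('Update downloaded; will install on quit.');
  });
  return autoUpdater;
}

// In the main process:
// const { autoUpdater } = require('electron-updater');
// wireAutoUpdater(autoUpdater);
// autoUpdater.checkForUpdatesAndNotify();
```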

Performance Handling

When it comes to handling resource-intensive processes, you may find yourself instinctively placing your resource-heavy scripts within the renderer process, with the rest of the code. Unfortunately, doing so will quickly prove taxing on performance. As with a web app in a browser, the renderer process is really only there to display, not to handle heavy functions as well, so adding such processes will visibly hinder performance. Keep an eye on the performance monitor within the dev tools, and if you see a function that’s too intense for the renderer, it may be time to move it somewhere else.

By default, larger scripts that demand a large amount of CPU and memory are better run within the main process. As long as the script is non-blocking, Electron will keep it from interfering with the renderer process’s performance by running it in the background. The major downside is the need for the Inter-Process Communication (IPC) API to pass data between the main process and the renderer. If you are passing a lot of data back and forth, this can become extremely inconvenient.
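For instance, a heavy computation can live in the main process and be called from the renderer over IPC. This sketch uses Electron's documented `ipcMain.handle`/`ipcRenderer.invoke` pair; the `count-primes` channel name and `countPrimes` function are hypothetical:

```javascript
// A deliberately CPU-heavy, pure function that we keep out of the renderer.
function countPrimes(limit) {
  let count = 0;
  for (let n = 2; n <= limit; n++) {
    let isPrime = true;
    for (let d = 2; d * d <= n; d++) {
      if (n % d === 0) { isPrime = false; break; }
    }
    if (isPrime) count++;
  }
  return count;
}

// Main process: register the handler on a hypothetical channel.
// const { ipcMain } = require('electron');
// ipcMain.handle('count-primes', (_event, limit) => countPrimes(limit));

// Renderer process: invoke it and await the result asynchronously.
// const { ipcRenderer } = require('electron');
// const primes = await ipcRenderer.invoke('count-primes', 1000000);
```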

If placing your script in the main process isn’t feasible, you also have the option of placing that particular script within a Web Worker. This will place your script into its own process, separate from Electron, meaning it lacks access to the Electron APIs (obviously). With this method you make your app a multi-threaded application, which opens the door to race conditions, but these can be avoided with standard thread-safety practices.
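A minimal sketch of the Web Worker approach follows; the `worker.js` file name and `sumOfSquares` function are hypothetical. Keeping the heavy function pure also makes it trivial to unit test outside the worker:

```javascript
// The heavy work, written as a pure function so it can be tested directly.
function sumOfSquares(numbers) {
  return numbers.reduce((total, n) => total + n * n, 0);
}

// worker.js would call the pure function and post the result back:
// self.onmessage = (e) => self.postMessage(sumOfSquares(e.data));

// Renderer side: spawn the worker and exchange messages.
// const worker = new Worker('worker.js');
// worker.onmessage = (e) => console.log('result:', e.data);
// worker.postMessage([1, 2, 3]);
```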

Multi Window Applications

It is certainly easy to create a multi-window application in Electron, but what if you want to communicate between these windows? Well, here is where we run into some shortcomings in Electron’s process model. First off, there is no way to directly communicate between one window and another. Instead, we have to communicate through the main process and have the main process contact the other window, all while keeping track of the ID of the calling window so we know where to return the response.

To accomplish this task, we need to create our own message-passing system, which you can learn how to write in our Window to Window Communication in Electron article.
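To give a feel for the approach, here is a minimal, hypothetical sketch of such a relay (the 'relay' channel and helper names are our own): the main process looks up the target window and forwards the payload along with the sender's ID so the receiver knows where to reply.

```javascript
// Main-process relay: renderers send { targetId, payload } on the 'relay'
// channel; we forward it to the target window, tagging the sender's ID.
function registerRelay(ipcMain, getWindowById) {
  ipcMain.on('relay', (event, { targetId, payload }) => {
    const target = getWindowById(targetId);
    if (!target) return; // the target window may already be closed
    target.webContents.send('relay', { fromId: event.sender.id, payload });
  });
}

// In the main process:
// const { ipcMain, BrowserWindow } = require('electron');
// registerRelay(ipcMain, (id) => BrowserWindow.fromId(id));
```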

Native Modules

One of the most intimidating features is building native modules, but it is actually an easier concept than it is initially perceived to be. The most important thing to keep in mind is that native modules are incredibly rewarding additions, bringing great power to your applications, and it is good to understand that many existing JS modules already include native modules.

Of course, the need for native modules isn’t all that common unless you plan to write your own C++ modules. So, unless you are familiar with building C++ modules in general, I wouldn’t recommend digging too deep into this process. Regardless, let’s take a look at some common requirements.

While building sometimes works out flawlessly, most failures are due to dependency changes getting in the way. To avoid this, use the postinstall script shown in the Electron Builder configuration to sync application dependencies any time a dependency package changes.

"scripts": {
  "postinstall": "electron-builder install-app-deps"
}

When it comes to the actual module building, there are several different solutions to choose from, and I would highly suggest reading the Electron doc on Using Native Modules before anything else. The idea is that we are building platform-specific modules for each given platform, so we can offer these additional features across multiple platforms. Another common problem is building on a version of NodeJS that is incompatible, usually because it’s too new. If you stay away from bleeding-edge versions of NodeJS, you can usually keep common build errors at bay.

The most obvious option is to build modules manually with node-gyp, which you will first need to install:

$ yarn global add node-gyp

When you find yourself in a situation where a native module simply doesn’t work in your application, your first step should always be to rebuild the module against your local Electron version, making sure the failure isn’t a mix-up of version dependencies. The electron-rebuild package handles this, so install it locally for future use:

$ yarn add electron-rebuild -D

If your plan is only to grab and build pre-made native modules, the easiest option is the node-pre-gyp package, which helps you install prebuilt native binaries, though it has a limited selection of modules.

Introducing WASM

While native modules are great, there is another avenue for introducing native code, and that is through the use of WASM (WebAssembly). WebAssembly, announced by the major browser vendors in 2015, was the answer to breaking the limits of browsers, allowing CPU/GPU-heavy applications to run within the browser. If you are familiar with the Unity game engine, you may know about the ability to build games using WebGL in Unity 2018.1 and beyond, which uses WASM to allow graphics-intensive games to be played within the browser. Check out Unity’s Angry Bots demo.

Note: Electron also offers WebGL support; it’s just a matter of flipping the switch in the BrowserWindow options.

Aside from the ability to access the system on a lower level, the performance boost is also a great benefit of using WASM for parts of your application. WASM bypasses JavaScript parsing and interpretation, creating less overhead at run time. Of course, this doesn’t mean you need to move all your code over to WASM; you can instead import your WASM and use it like any other JavaScript object.
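To illustrate, here is a tiny hand-assembled WASM module whose exported `add` function is called like an ordinary JavaScript function (the byte array encodes a minimal module per the WebAssembly binary format):

```javascript
// A minimal WASM module exporting add(a, b) -> a + b, assembled by hand.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

// Instantiate synchronously and call the export like any JS function.
const { add } = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes)).exports;
console.log(add(2, 3)); // 5
```

In practice you would compile the module from a language like Rust or C++ and load the resulting .wasm file, but the calling convention on the JavaScript side is the same.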

Now for the more intimidating portion, the development process. When it comes to development in the Rust language, it is surprisingly pleasant for a statically typed language. Not only is it a lot more forgiving when it comes to building native modules, it is outstanding at giving legible errors.

To try it out, the following tutorial will guide you through the process of creating a WASM app in Rust, as well as including it in a web application. If you want to incorporate that application into an Electron app, take a look at this Electron-WASM example project. Really, the process is a lot easier than one would think, and a whole lot easier than building node native modules.


I know there is a lot to take in here, but Electron is more involved than one might originally imagine. There is so much more to discuss with Electron, and maybe I will cover those special use cases at a later time, but for now, these are the most common use cases to be aware of when working with Electron.

