request / response

a blog about the web, Go, and building things

(by Matt Silverlock)

From Firestore to BigQuery with Firebase Functions


In building my sentiment analysis service, I needed a way to get data into BigQuery + Data Studio so I could analyze trends against pricing data. My service (on App Engine) uses Firestore as its primary data store, treating it as an append-only log of all analysis runs to date.

The flexible schema (especially during development), solid Go client library & performance story were major draws, but one of the clear attractions was being able to trigger an external Firebase Function (Cloud Function) on Firestore events. Specifically, I wanted to get the results of each analysis run into BigQuery so I could run queries & set up Data Studio visualizations as-needed.

I wrote a quick function that:

  • Triggers on each onCreate event to Firestore
  • Pulls out the relevant fields I wanted to analyze in BigQuery: counts, aggregates and the search query used
  • Inserts them into the configured BigQuery dataset & table.

With that data in BigQuery, I’m able to pull it into Data Studio, generate charts & analyze trends over time.

Creating the Function

If you haven’t created a Firebase Function before, there’s a great Getting Started guide that steps you through installing the SDK, logging in, and creating the scaffolding for your Function.

Note: Firebase Functions initially need to be created & deployed via the Firebase CLI, although it sounds like Google will support the Firebase-specific event types within Cloud Functions & the gcloud SDK (CLI) in the not-too-distant future.

Within index.js, we’ll require the necessary libraries, and export our sentimentsToBQ function. This function has a Firestore trigger: specifically, it triggers when any document that matches /sentiment/{sentimentID} is created (onCreate). The {sentimentID} part is effectively a wildcard: it means “any document under this path”.

const functions = require("firebase-functions")
const BigQuery = require("@google-cloud/bigquery")

exports.sentimentsToBQ = functions.firestore
  .document("/sentiment/{sentimentID}")
  .onCreate(event => {
    console.log(`new create event for document ID: ${event.params.sentimentID}`)

    // Set via: firebase functions:config:set centiment.{dataset,table}
    let config = functions.config()
    let datasetName = config.centiment.dataset || "centiment"
    let tableName = config.centiment.table || "sentiments"
    let bigquery = new BigQuery()

We can use the Firebase CLI to override the config variables that define our dataset & table names as needed via firebase functions:config:set centiment.dataset "centiment" - useful if we want to change the destination table during a migration/copy.

let dataset = bigquery.dataset(datasetName)
dataset.exists().catch(err => {
  console.error(
    `dataset.exists: dataset ${datasetName} does not exist: ${JSON.stringify(err)}`
  )
  return err
})

let table = dataset.table(tableName)
table.exists().catch(err => {
  console.error(
    `table.exists: table ${tableName} does not exist: ${JSON.stringify(err)}`
  )
  return err
})

We check that the destination dataset & table exist - if they don’t, we return an error. In some cases you may want to create them on-the-fly, but here we expect that they exist with a specific schema.
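Since we expect the dataset & table to exist with a specific schema, they’d be created once up-front via the bq CLI. Something like the below works, where the column types are my assumption based on the fields we insert, not taken from the post:

```shell
# Create the dataset, then the table with an explicit schema
~ bq mk centiment
~ bq mk --table centiment.sentiments \
    id:STRING,count:INTEGER,fetchedAt:TIMESTAMP,lastSeenID:STRING,score:FLOAT,variance:FLOAT,stdDev:FLOAT,searchTerm:STRING,query:STRING,topic:STRING
```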

let document = event.data.data()

let row = {
  insertId: event.params.sentimentID,
  json: {
    id: event.params.sentimentID,
    count: document.count,
    fetchedAt: document.fetchedAt,
    lastSeenID: document.lastSeenID,
    score: document.score,
    variance: document.variance,
    stdDev: document.stdDev,
    searchTerm: document.searchTerm,
    query: document.query,
    topic: document.topic,
  },
}

The event.data.data() method returns the current state of the Firestore document, which is what we want to insert. The previous state of the document can also be accessed via event.data.previous.data(), which could be useful if we were logging specific deltas (say, a field changes by >= 10%) or otherwise tracking per-field changes within a document.

Note that we define an insertId to prevent duplicate rows in the event the function fails to stream the data and has to retry. The insertId is simply the auto-generated ID that Firestore provides, which is exactly what we want to de-duplicate a record on should it potentially be inserted twice, as our application treats Firestore as an append-only log. If we were expecting multiple writes to a record every minute, and wanted to stream those to BigQuery as distinct documents, we would need to use a different approach.

Beyond that, we compose an object with explicit columnName <=> fieldName mappings, based on our BigQuery schema. We don’t need every possible field from Firestore - only the ones we want to run analyses on. Further, since Firestore has a flexible schema, new fields added to our Firestore documents may not exist in our BigQuery schema.

The last part of our function is responsible for actually inserting the row into BigQuery: we call table.insert and set raw: true in the options, since we’re passing a row directly:

return table.insert(row, { raw: true }).catch(err => {
  console.error(`table.insert: ${JSON.stringify(err)}`)
  return err
})

As table.insert is a Promise, we should return the Promise itself, which will either resolve (success) or reject (failure). Because we don’t need to do any post-processing in the success case, we only explicitly handle the rejection, logging the error and returning it to signal completion. Not returning the Promise would cause the function to return early, and potentially prevent execution or error handling of our table.insert. Not good!


Deploying our function is straightforward:

# Deploys our function by name
$ firebase deploy --only functions:sentimentsToBQ

=== Deploying to 'project-name'...
i  deploying functions
i  functions: ensuring necessary APIs are enabled...
✔  functions: all necessary APIs are enabled
i  functions: preparing _functions directory for uploading...
i  functions: packaged _functions (41.74 KB) for uploading
✔  functions: _functions folder uploaded successfully
i  functions: current functions in project: sentimentsToBQ
i  functions: uploading functions in project: sentimentsToBQ
i  functions: updating function sentimentsToBQ...
✔  functions[sentimentsToBQ]: Successful update operation.

Deployment takes about 10 - 15 seconds, but I’d recommend using the local emulator to ensure the function behaves as expected.

Querying in BigQuery

So how do we query our data? We can use the BigQuery console or the bq CLI. We’ll use the command-line tool here, but the query is the same either way:

bq query --nouse_legacy_sql 'SELECT * FROM `centiment.sentiments` ORDER BY fetchedAt LIMIT 5;'
Waiting on bqjob_r1af4578a67b94241_000001618c40385c_1 ... (1s)
Current status: DONE

+----------------------+---------+---------------------+-------+
|          id          |  topic  |        score        | count |
+----------------------+---------+---------------------+-------+
| PSux4gwOsHyUGqqdsdEI | bitcoin | 0.10515464281605692 |    97 |
| ug8Zm5sSZ2dtJXPIQWKj | bitcoin |  0.0653061231180113 |    98 |
| 63Qo2gRgsG7Cz2zywKOO | bitcoin | 0.09264705932753926 |    68 |
| Y5sraBzPrhBzsmOyHcm3 | bitcoin | 0.06601942062956613 |   103 |
| r3XApKXJ6feglUcyG1db | bitcoin | 0.13238095435358221 |   105 |
+----------------------+---------+---------------------+-------+
# Note that I've reduced the number of columns returned so it fits in the blog post

We can now see the results that we originally wrote to Firestore, and run aggregations, analyses and/or export them to other formats as needed.
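For example, a hypothetical aggregation over the same table - the average sentiment score and total count per topic:

```shell
~ bq query --nouse_legacy_sql 'SELECT topic, AVG(score) AS avgScore, SUM(count) AS total FROM `centiment.sentiments` GROUP BY topic ORDER BY avgScore DESC;'
```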


The Code

For the record, here’s the full function as it is in production at the time of writing:

const functions = require("firebase-functions")
const BigQuery = require("@google-cloud/bigquery")

exports.sentimentsToBQ = functions.firestore
  .document("/sentiment/{sentimentID}")
  .onCreate(event => {
    console.log(`new create event for document ID: ${event.params.sentimentID}`)

    // Set via: firebase functions:config:set centiment.{dataset,table}
    let config = functions.config()
    let datasetName = config.centiment.dataset || "centiment"
    let tableName = config.centiment.table || "sentiments"
    let bigquery = new BigQuery()

    let dataset = bigquery.dataset(datasetName)
    dataset.exists().catch(err => {
      console.error(
        `dataset.exists: dataset ${datasetName} does not exist: ${JSON.stringify(err)}`
      )
      return err
    })

    let table = dataset.table(tableName)
    table.exists().catch(err => {
      console.error(
        `table.exists: table ${tableName} does not exist: ${JSON.stringify(err)}`
      )
      return err
    })

    let document = event.data.data()

    let row = {
      insertId: event.params.sentimentID,
      json: {
        id: event.params.sentimentID,
        count: document.count,
        fetchedAt: document.fetchedAt,
        lastSeenID: document.lastSeenID,
        score: document.score,
        variance: document.variance,
        stdDev: document.stdDev,
        searchTerm: document.searchTerm,
        query: document.query,
        topic: document.topic,
      },
    }

    return table.insert(row, { raw: true }).catch(err => {
      console.error(`table.insert: ${JSON.stringify(err)}`)
      return err
    })
  })

Windows Subsystem for Linux w/ zsh, tmux & Docker


I recently put together a Windows machine for gaming, and although I still do most of my development on macOS due to a great third-party ecosystem, BSD underpinnings & better programming language support, I decided to see what development life was like on Windows in 2018.

As a spoiler: it’s not perfect, but it’s definitely usable day-to-day. If you’re developing applications that don’t rely on OS-level differences (e.g. not systems programming), you can certainly use Windows + the Windows Subsystem for Linux (WSL) as your only setup. If you’re working with container-based applications, then it becomes even more usable.

I’m going to walk through a setup that gets you up & running with a few staples, namely:

  • Hyper (your terminal)
  • zsh + Oh My Zsh (your shell)
  • tmux (your multiplexer)
  • Visual Studio Code (your editor)
  • Docker (for containerized applications)

First Things First

You’ll need to enable and install the Windows Subsystem for Linux. Basic familiarity with the Linux CLI is also useful here: although this is a step-by-step guide, knowing how to edit text files with vim or nano is going to be helpful.

Hyper (your terminal)

Hyper is a fairly new terminal application, and although it’s not as polished as the venerable iTerm on macOS, it gets the job done. It uses the same underpinnings as the integrated terminal in VSCode (xterm.js), which means it sees regular releases and bug-fixes.

Out of the box, Hyper will use the Windows command prompt (cmd.exe) or Powershell (powershell.exe). In order to have it use your WSL shell, you’ll need to make a quick adjustment.

In Hyper, head to Edit > Preferences and modify the following keys:

    shell: 'wsl.exe',

    // for setting shell arguments (i.e. for using interactive shellArgs: ['-i'])
    // by default ['--login'] will be used
    shellArgs: [],

Note that if you have multiple Linux distributions installed via WSL, and you don’t want Hyper to use your default, you can set the value for shell to (e.g.) 'ubuntu.exe'.

Hyper is extremely configurable, and the awesome-hyper repository over on GitHub includes a long list of themes, plugins and tweaks.

zsh + ohmyzsh (your shell)

We’re also going to set up zsh as our default shell, alongside Oh My Zsh for its built-ins, themes and plugins.

First, confirm that zsh is available and installed (it should be, by default):

~ which zsh

And then change your default shell to zsh:

~ chsh -s /usr/bin/zsh
# Enter your password, and hit enter
# Confirm the change
~ echo $SHELL

We can now install oh-my-zsh -

# As per the instructions here:
# NOTE: Don't just install any old program by piping a file into sh. Anything your user can do, the script can do. Make sure you at least trust the source of the script.
~ sh -c "$(curl -fsSL https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"

Once complete, you can begin tweaking things as per the README.
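For example, a couple of common tweaks in ~/.zshrc - the theme and plugins here are illustrative picks, not requirements:

```
# ~/.zshrc (managed by oh-my-zsh)
ZSH_THEME="agnoster"        # any theme from oh-my-zsh's themes/ directory
plugins=(git golang docker) # enable a few of the bundled plugins
source $ZSH/oh-my-zsh.sh
```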


tmux (your multiplexer)

tmux, if you’re not familiar, is a terminal multiplexer. Think of it as a way to run multiple shells quickly-and-easily, either in a grid-like fashion, or via a “tab” paradigm (or both). It’s extremely useful for multi-tasking: edit code or configs in one pane, watch results in another, and tail -f a log in a third.

The tmux version (2.1) available under Ubuntu 16.04 is getting on, and thus we’ll be building a newer version (2.6, at the time of writing) from source.

# Fetch the latest version of tmux from the GitHub releases page - e.g.
~ curl -sLo tmux-2.6.tar.gz https://github.com/tmux/tmux/releases/download/2.6/tmux-2.6.tar.gz
# Unpack it
~ tar xvf tmux-2.6.tar.gz
~ cd tmux-2.6
# Install the dependencies we need
~ sudo apt-get install build-essential libevent-dev libncurses-dev
# Configure, make & install tmux itself
~ ./configure && make
~ sudo make install
# Confirm it works
~ tmux
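Optionally, drop a minimal ~/.tmux.conf in place before going further - the settings below are suggestions of mine, not part of the original setup:

```
# ~/.tmux.conf
set -g default-terminal "screen-256color"  # 256-color support inside tmux
set -g mouse on                            # mouse pane selection & scrolling (tmux >= 2.1)
set -g base-index 1                        # number windows from 1
bind | split-window -h                     # more memorable split bindings
bind - split-window -v
```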

We’ll also want zsh to create (or use an existing) tmux session if available, so that we’re always in tmux. Let’s modify .zshrc to achieve that:

# open .zshrc in your preferred editor - e.g. vim
alias tmux="tmux -2 -u"

if which tmux 2>&1 >/dev/null; then
  test -z "$TMUX" && (tmux attach || tmux new-session)
fi

We’ll now make sure zsh uses this updated config:

~ source .zshrc

Visual Studio Code

We have a standalone terminal w/ zsh + Oh My Zsh installed. Let’s make sure VSCode uses it for those times we’re using its integrated terminal. We’ll also want it to launch Hyper as our external terminal application, rather than cmd.exe or Powershell.

Open up VSCode’s preferences via File > Preferences > Settings (Ctrl+,) and update the following keys:

    "terminal.external.windowsExec": "%userprofile%\\AppData\\Local\\hyper\\Hyper.exe",
    "terminal.integrated.shell.windows": "wsl.exe"

Note: VSCode extensions that rely on background daemons or language servers to provide static analysis, formatting and other features will still use (require) the Windows-based version of these tools by default. There’s an open issue tracking this for Go, but it’s not a solved problem yet.


Docker (your containers)

We’re also going to install Docker, via Docker for Windows (the daemon) and the Docker CLI (the client, effectively) within our WSL environment. This allows us to make use of Hyper-V and maintain good performance from our containerized applications, and avoid the minefield that is VirtualBox.

Once you’ve installed Docker for Windows—which may require rebooting to install Hyper-V, if not already enabled—you’ll also need to allow connections from legacy clients in the Docker settings. Check “Expose daemon on tcp://localhost:2375 without TLS”.

Note that this reduces the security of your setup slightly: other services already running on your machine could MitM connections between the Docker client and the Docker daemon. This does not expose the daemon to the local network, but there does not appear to be a way to retain TLS authentication between WSL and Docker for Windows yet.

# Install our dependencies
~ sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
# Add the Docker repository
~ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
~ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) edge"
~ sudo apt-get update
# Install Docker Community Edition
~ sudo apt-get install -y docker-ce
# Add your user to the Docker group
~ sudo usermod -aG docker $USER

We’ll also need to tell our Docker client (inside WSL) how to connect to our Docker daemon (Docker on Windows).

# Persist this to shell config
~ echo "export DOCKER_HOST=tcp://localhost:2375" >> $HOME/.zshrc
~ source ~/.zshrc
# Check that Docker can connect to the daemon (should not get an error)
~ docker images

If you see any errors about not being able to find the Docker host, make sure that Docker for Windows is running, that you’ve allowed legacy connections in settings, and that echo $DOCKER_HOST correctly returns tcp://localhost:2375 in the same shell as you’re running the above commands in.

Now, let’s verify that you can run a container and connect to an exposed port:

~ docker run -d -p 8080:80 openresty/openresty:latest
~ curl localhost:8080
<!DOCTYPE html>
<title>Welcome to OpenResty!</title>


Note: The guide by Nick Janetakis covers more of the details, including getting working mount points up-and-running.

What Else?

It’s worth noting that with Ubuntu 16.04.3 being an LTS release, software versions in the official repositories can be fairly out of date. If you’re relying on later versions of tools, you’ll need to either add their official package repositories (preferred; easier to track updates), install a binary build (good, but rarely self-updating), or build from source (slower, no automatic updates).

As additional tips:

  • Yarn (the JS package manager) provides an official package repository, making it easy to keep it up-to-date.
  • Ubuntu 16.04’s repositories only have Go 1.6 (3 versions behind as of Dec 2017), and thus you’ll need to download the binaries - keeping in mind you’ll need to manually manage updates to newer Go patch releases and major versions yourself.
  • Similarly with Redis: 3.0.6 is available in the official repository, but Redis 4.0 included some big changes (inc. the module system, eviction changes, etc.) and thus you’ll need to build from source.
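The binary-build route for Go looks something like the below - the version is just whatever was current at the time, so substitute the latest from the downloads page:

```shell
~ curl -sLo go.tar.gz https://dl.google.com/go/go1.9.2.linux-amd64.tar.gz
~ sudo tar -C /usr/local -xzf go.tar.gz
~ echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.zshrc
~ source ~/.zshrc
~ go version
```

Remember that an install like this won’t receive updates via apt-get; you’ll repeat these steps for each new Go release.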

This is reflective of my experience setting up WSL on Windows 10, and I’ll aim to keep it up-to-date as WSL improves over time—esp. around running later versions of Ubuntu. If you have questions or feedback, ping me on Twitter @elithrar.

Automatically Build Go Binaries via TravisCI & GitHub


GitHub has a great Releases feature that allows you to surface—and users to download—tagged releases of your projects.

By default, Releases will provide links to a ZIP and a tarball of the source code for that tag. But for projects with binary releases, manually building and then uploading binaries (perhaps for multiple platforms!) is time-consuming and fragile. Making binary releases available automatically is great for the users of a project too: they can use it without having to deal with toolchains (e.g. installing Go) and environments. Making software usable by non-developers is an important goal for many projects.

We can use TravisCI + GitHub Releases to do all of the work for us with a fairly straightforward configuration, so let’s take a look at how to release Go binaries automatically.


Here’s the full .travis.yml from a small utility library I wrote at my day job. This will:

  • Always builds on the latest Go version (“go: 1.x”) and sets an env variable. We’ll use this to only build binaries using the latest Go release.
  • Builds as far back as Go 1.5.
  • Builds, but doesn’t fail the entire run, on “tip” (e.g. Go’s master branch, which breaks from time-to-time).

It then runs a fairly straightforward build script using Go’s existing tooling: gofmt (style), go vet (correctness), and then any tests with the race detector enabled.

The final step—and the reason why you’re probably reading this post!—is invoking gox to build binaries for Linux, Darwin (macOS) & Windows, and setting the “Rev” variable to the git commit it was built from. The latter is super useful for debugging or supporting users when combined with a -version command-line flag. We also only release on tagged commits via tags: true so that we’re only releasing binaries with intent. Tests are otherwise automatically run on every branch (inc. Pull Requests).
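On the Go side, wiring that “Rev” variable up to a -version flag is only a few lines. A sketch follows - the flag name and output format are my choice here, and logshare’s actual implementation may differ:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// Rev is overwritten at build time via:
//   go build -ldflags "-X main.Rev=$(git rev-parse --short HEAD)"
// and defaults to "unknown" for plain `go build` invocations.
var Rev = "unknown"

// versionString formats the revision for the -version flag.
func versionString() string {
	return fmt.Sprintf("logshare (rev: %s)", Rev)
}

func main() {
	version := flag.Bool("version", false, "print the git revision this binary was built from")
	flag.Parse()
	if *version {
		fmt.Println(versionString())
		os.Exit(0)
	}
	// ... rest of the program
}
```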

language: go
sudo: false
matrix:
  include:
    - go: 1.x
      env: LATEST=true
    - go: 1.5
    - go: 1.6
    - go: 1.7
    - go: tip
  allow_failures:
    - go: tip

before_install:
  - go get github.com/mitchellh/gox

install:
  - # skip

script:
  - go get -t -v ./...
  - diff -u <(echo -n) <(gofmt -d .)
  - go vet $(go list ./... | grep -v /vendor/)
  - go test -v -race ./...
  # Only build binaries from the latest Go release.
  - if [ "${LATEST}" = "true" ]; then gox -os="linux darwin windows" -arch="amd64" -output="logshare.{{.OS}}.{{.Arch}}" -ldflags "-X main.Rev=`git rev-parse --short HEAD`" -verbose ./...; fi

deploy:
  provider: releases
  skip_cleanup: true
  api_key:
    secure: wHqq6Em56Dhkq4GHqdTXfNWB1NU2ixD0/z88Hu31MFXc+Huz5p6np0PUNBOvO9jSFpSzrSGFpsD5lkExAU9rBOI9owSRiEHpR1krIFbMmCboNqNr1uXxzxam9NWLgH8ltL2LNX3hp5teYnNpE4EhIDsGqORR4BrgXfH4eK7mvj/93kDRF2Wxt1slRh9VlxPSQEUxJ1iQNy3lbZ6U2+wouD8TaaJFgzPtueMyyIj2ASQdSlWMRyCVXJPKKgbRd5jLo2XHAWmmDb9mC8u8RS5QlB1klJjGCOl7gNC0KHYknHk6sUVpgIdnmszQBdVMlrZ6yToFDSFI28pj0PDmpb3KFfLauatyQ/bOfDoJFQQWgxyy30du89PawLmqeMoIXUQoA8IWF3nl/YhD+xsLCL1UH3kZdVZStwS/EhMcKqXBPn/AFi1Vbh7m+OMJAVvZp3xnFDe/H8tymczOWy4vDnyfXZQagLMsTouS/SosCFjjeL/Rdz6AEcQRq5bYAiQBhjVwlobNxZSMXWatNSaGz3z78dPEx9qfHnKixmBTacrJd6NlBhWH1kvg1c7TT2zlPxt6XTtsq7Ts/oKNF2iXXhw8HuzZv1idCiWfxobdajZE3EY+8akR060ktT4KEgRmCC/0h6ncPVT0Vaba1XZvbjlraol/p3tswXgGodPsKL87AgM=
  file:
    - logshare.darwin.amd64
    - logshare.linux.amd64
  on:
    repo: cloudflare/logshare
    tags: true
    condition: $LATEST = true

Note: It’s critical that you follow TravisCI’s documentation on how to securely encrypt your API key—e.g. don’t paste your raw key into this file, ever. TravisCI’s documentation and CLI tool make this straightforward.


Pretty easy, right? If you’re already using Travis CI to test your Go projects, extending your configuration to release binaries on tagged versions is only a few minutes of work.

Further Reading

A Node + TypeScript Starter


TL;DR: If you want a simple template/boilerplate to get you started with TypeScript + Node.js, clone this repo.

The more time I spent writing JavaScript, the more I missed a stricter type-system to lean on. TypeScript seemed like a great fit, but coming from Go, JavaScript/TypeScript projects require a ton of configuration to get started. Knowing what dependencies you need (typescript, tslint, ts-node), linter configuration (tslint.json), and putting together the right tsconfig.json for a Node app (vs. a browser app) wasn’t well documented. I wanted to compile to ES6, use CommonJS modules (what Node.js consumes), and generate type definitions alongside the .js files so that editors like VS Code (or other TypeScript authors) can benefit.
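Concretely, those goals translate into a tsconfig.json roughly like the below - a sketch of the relevant compiler options; see the starter repo for the authoritative config:

```
{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs",
    "declaration": true,
    "sourceMap": true,
    "outDir": "dist"
  },
  "include": ["src/**/*"]
}
```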

After doing this for a couple of small projects, I figured I’d had enough, and put together node-typescript-starter, a minimal-but-opinionated configuration for TypeScript + Node.js. It’s easy enough to change things, but it should provide a basis for writing code rather than configuration.

To get started, just clone the repo and write some TypeScript:

git clone https://github.com/elithrar/node-typescript-starter.git
# Then: replace what you need to in package.json, update the LICENSE file
yarn install # or npm install
# Start writing TypeScript!
open src/App.ts

… and then build it:

yarn build
# Outputs:
# yarn build v0.22.0
# $ tsc
# ✨  Done in 2.89s.

And that’s it. PR’s are welcome, keeping in mind the intentionally minimal approach.

From vim to Visual Studio Code


I’ve been using vim for a while now, but the recent noise around Visual Studio Code had me curious, especially given some long-running frustrations with vim. I have a fairly comprehensive vim config, but it often feels fragile. vim-go, YouCompleteMe & ctags sit on top of vim to provide autocompletion: but re-compiling libraries, dealing with RPC issues and keeping it working when you just want to write code can be Not Fun. Adding proper autocompletion for additional languages—like Python and Ruby—is an exercise in patience (and spare time).

VS Code, on the other hand, is pretty polished out of the box: autocompletion is significantly more robust (and tooltips are richer), and adding additional languages is extremely straightforward. If you write a lot of JavaScript/Babel or TypeScript, then it has some serious advantages (JS-family support is first-class). And despite the name, Visual Studio Code (“VS Code”) doesn’t bear much resemblance to Visual Studio proper. Instead, it’s most similar to GitHub’s Atom editor: more text editor than full-blown IDE, but with a rich extensions interface that allows you to turn it into an IDE depending on what language(s) you hack on.

Thus, I decided to run with VS Code for a month to see whether I could live with it. I’ve read articles by those switching from editors like Atom, but nothing on a hard-swap from vim. To do this properly, I went all in:

  • I aliased vim to code in my shell (bad habits and all that)
  • Changed my shell’s $EDITOR to code
  • Set git difftool to use code --wait --diff $LOCAL $REMOTE.
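The git difftool setting from that last bullet persists via your global Git config:

```shell
# Use VS Code as git's diff tool: `git difftool` opens side-by-side diffs
git config --global diff.tool vscode
git config --global difftool.vscode.cmd 'code --wait --diff $LOCAL $REMOTE'
```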

Note that I still used vim keybindings via VSCodeVim. I can’t imagine programming another way.


Worth noting before you read on:

  • When I say “vim” I specifically mean neovim: I hard-switched sometime late in 2015 (it was easy) and haven’t looked back.
  • I write a lot of Go, some Python, Bash & and ‘enough’ JavaScript (primarily Vue.js), so my thoughts are going to be colored by the workflows around these languages.
  • vim-go single-handedly gives vim a productivity advantage, but vscode-go isn’t too far behind. The open-godoc-in-a-vim-split (and generally better split usage) of vim-go is probably what wins out.

Saying that, the autocompletion in vscode-go is richer and clearer, thanks to VS Code’s better autocompletion as-a-whole, and will get better.



Throughout this period, I realised I had two distinct ways of using an editor, each prioritizing different things:

  • Short, quick edits. Has to launch fast. This is typically anything that uses $EDITOR (git commit/rebase), short scripts, and quickly manipulating data (visual block mode + regex work well for this).
  • Whole projects. Must manage editing/creating multiple files, provide Git integration, debugging across library boundaries, and running tests.

Lots of overlap, but it should be obvious where I care about launch speed vs. file management vs. deeper language support.

Short Edits


  • VS Code’s startup speed isn’t icicle-like (think: early Atom), but it’s still slow, especially from a cold start: ~5 seconds from code $filename to a text-rendered-and-extensions-loaded usable state, which is about twice that of a plugin-laden neovim.
  • Actual in-editor performance is good: command responsiveness, changing tabs, and jumping to declarations never feels slow.
  • If you’ve started in the shell, switching to another application to edit a file or modify a multi-line shell command can feel a little clunky. I’d typically handle this by opening a new tmux split (retaining a prompt where I needed it) and then using vim to edit what I needed.

Despite these things, it’s still capable of these tasks. vim just had a huge head-start, and is most at home in a terminal-based environment.


Whole Projects

VS Code is really good here, and I think whole-project workflows are its strength, but it’s not perfect.

  • The built-in Git support rivals vim-fugitive, moving between splits/buffers is fast, and it’s easy enough to hide. The default side-by-side diffs look good, although you don’t have as many tools to do a 3-way merge (via :diffget, etc.) as you do with vim-fugitive.
  • Find-in-files is quick, although I miss some of the power of ag.vim, which hooks into my favorite grep replacement, the-silver-searcher.
  • What I miss from NERDTree is the ability to search it just like a buffer: /filename is incredibly useful on larger projects with more complex directory structures (looking at you, JS frameworks!). You’re also not able to navigate the filesystem up from the directory you opened, although I did see an issue for this and a plan for improvement.

I should note that opening a directory in VS Code triggers a full reload, which can be a little disruptive.

(Screenshots: VS Code’s side-by-side Git diff UI, and its Git commands in the command palette.)

Other Things

There’s a bunch of smaller things that, whilst unimportant in the scope of getting things done, are still noticeable:

  • Font rendering. If you’re on macOS (nee OS X), then you’ll notice that VS Code’s font rendering (Chromium’s font rendering) is a little different from your regular terminal or other applications. Not worse; just different.
  • Tab switching: good! Fast, and there’s support for vim commands like :vsp to open files in splits.
  • You can use most of vim’s substitution syntax: :s/before/after/(g) works as expected. gc (for confirmation) doesn’t appear to work.
  • EasyMotion support is included in the VSCodeVim plugin: although I’m a vim-sneak user, EasyMotion is arguably more popular among vim users and serves the same overall goals (navigating short to medium distances quickly). <leader><leader>f<char> (in my config) allows me to easily search forwards by character.
  • The native terminal needs a bunch of work to make me happy. It’s based on xterm.js, which could do with a lot more love if VS Code is going to tie itself to it. It just landed support for hyperlinks (in VS Code 1.9), but solid tmux support is still lacking and makes spending time in the terminal feel like a chore vs. iTerm.
  • Configuration: I aliased keybindings.json and settings.json into my dotfiles repository, so I can effectively sync them across machines like .vimrc. Beyond that: VS Code is highly configurable, and although writing out JSON can be tedious, it does a lot to make changing default settings as easy as possible for you (autocompletion of settings is a nice touch).

You might also be asking: what about connecting to other machines remotely, where you only have an unadorned vim on the remote machine? That wasn’t a problem with vim thanks to the netrw plugin—you would connect to/browse the remote filesystem with your local, customized vim install—but is a shortcoming with VS Code. I wasn’t able to find a robust extension that would let me do this, although (in my case) it’s increasingly rare to SSH into a box given how software is deployed now. Still, worth keeping in mind if vim scp://user@host:/path/to/ is part of your regular workflow.

So, Which Editor?

I like VS Code a lot. Many of the things that frustrate me are things that can be fixed, although I suspect improving startup speed will be tough (loading a browser engine underneath and all). Had I tried it 6+ months ago it’s likely I would have stuck with vim. Now, the decision is much harder.

I’m going to stick it out, because for the language I write the most (Go), the excellent autocompletion and toolchain integration beat out the other parts. If you are seeking a modern editor with 1:1 feature parity with vim, then look elsewhere: each editor brings its own things to the table, and you’ll never be pleased if you’re seeking that.

© 2018 Matt Silverlock | His other site | Code snippets are MIT licensed | Built with Jekyll