How We Deploy Containers at Grammarly

As discussed in the first part of this series, we were very excited when we figured out how to properly build Docker images, until we realized that we had no idea how to run them in production. You might have already guessed that we were pondering building our own tool…


Making Docker Rock at Grammarly

Today, the industry is saturated with discussions about containers. Many companies are looking for ways to benefit from running an immutable infrastructure or simply to boost development velocity by making builds repeatable across environments. However, sometimes by simplifying the user experience we end up complicating the implementation. On our journey to a usable, containerized infrastructure, we faced a number of daunting challenges, the solutions to which are the subject of this post. Welcome to the bleeding edge!


Petabyte-Scale Text Processing with Spark

At Grammarly, we have long used Amazon EMR with Hadoop and Pig to support our big data processing needs. However, we were excited about the improvements the maturing Apache Spark offers over Hadoop and Pig, so we set about getting Spark to work with our petabyte-scale text data set. In this post, we describe the challenges we encountered along the way and the scalable Spark setup we arrived at as a result.


Running Lisp in Production

At Grammarly, the foundation of our business, our core grammar engine, is written in Common Lisp. It currently processes more than a thousand sentences per second, is horizontally scalable, and has reliably served in production for almost 3 years.

We noticed that there are very few, if any, accounts of how to deploy Lisp software to modern cloud infrastructure, so we thought it would be a good idea to share our experience. The Lisp runtime and programming environment provide several unique, albeit obscure, capabilities to support production systems (for the impatient, they are described in the final chapter).
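
To give a flavor of one such capability, here is a minimal sketch of a well-known trick in the Lisp world: starting a Swank server inside a running SBCL image so that an operator can later attach a SLIME REPL to the live process and inspect, profile, or even patch it without a restart. The port number is an illustrative default, and this is not necessarily the exact setup we run in production.

    ;;; Minimal sketch: expose an interactive REPL inside a running Lisp image.
    ;;; Assumes Quicklisp is available; the :swank system ships with SLIME.
    (ql:quickload :swank)

    ;; Start a Swank server on the loopback interface. An operator can then
    ;; tunnel in (e.g., over SSH) and run M-x slime-connect from Emacs to
    ;; inspect state or redefine functions in the live process.
    (swank:create-server :interface "127.0.0.1"
                         :port 4005
                         :dont-close t) ; keep listening after a client disconnects

Because connecting drops you into the same image that is serving traffic, the usual caveat applies: anything you redefine takes effect immediately.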
