Dev Stack

Oct 24, 2018


by Roman Bilusyak, Alex Sukholeyster

We are building an ingenious engine, Ingeenee, to solve an NP-complete problem within a real-time SLA. Existing methods and technologies obviously cannot be used, because P vs. NP is still one of the Millennium Problems. Hence we are building a brand new AI tech, capable of delivering a solution to an NP problem in predictable time. The tech is being built from scratch, for the travel industry. This post is about our development stack. It could be useful to future hires: to get prepared and to join us in 2019.

The tools and environments have to be fast, reliable, and easy to set up; they also have to scale massively and be easy to monitor and maintain.

Programming Languages

Python was used from the very beginning. It was good for conceptualizing, and it was chosen mainly because of all the available libraries, from NumPy and SciPy to various client bindings (Redis, PostgreSQL). The geo engine also requires some specific geo libraries (such as Shapely or H3). Python was used everywhere until we faced performance issues; that is when we started to rewrite our travel intelligence in Go.

Go was selected because it is not as low-level as C or Rust, and much cleaner than C++. Go compiles to the x86/AMD64, ARMv8, and ppc64 architectures, and it is driven and maintained by Google. We always use the latest version (currently 1.11) to gain maximum performance, though Go did not receive much performance-oriented optimization during 2018.

Having mixed (Python/Go/Bash…) modules introduces additional complications into the integration process: how do the modules communicate with each other? We follow the Unix philosophy here:

Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.

The Art of Unix Programming

So in our case, each module is a standalone program, and the text streams are JSON-formatted data: easy to parse by a program, easy to read by a human, easy to compress when needed, and easy to switch to binary protobuf later.
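A minimal sketch of such a module, assuming a hypothetical stream of fare records (the field names here are made up for illustration): it reads one JSON object per line, transforms it, and writes one JSON object per line, so modules can be chained with plain Unix pipes.

```python
import io
import json

def enrich(record):
    # Hypothetical transformation step: tag each fare record with a price band.
    record["price_band"] = "low" if record.get("price", 0) < 100 else "high"
    return record

def run(stream_in, stream_out):
    # One JSON object per line: easy to parse, grep, compress, or pipe.
    for line in stream_in:
        line = line.strip()
        if line:
            stream_out.write(json.dumps(enrich(json.loads(line))) + "\n")

# In a real module this would be run(sys.stdin, sys.stdout);
# here we feed an in-memory stream for illustration.
out = io.StringIO()
run(io.StringIO('{"price": 42}\n{"price": 500}\n'), out)
print(out.getvalue(), end="")
```

Because each module talks to the world only through stdin/stdout, swapping Python for Go on one side of the pipe does not affect the other.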


We all write code, and we all use Sublime Text as our primary text editor. Yes, we don't use any IDE for our development at all. The main reason is that each IDE supports one language well, while we create our tech using Python, Go, Bash, R, JavaScript, and HTML/CSS. Also, the main features most people look for in an IDE are syntax highlighting and code formatting, and both work in Sublime. There is one more feature, code suggestions, but that also works in Sublime via third-party plugins. We don't really need suggestions, because we design our code not to be suggestion-dependent. From another perspective, Sublime provides some advanced text-editing features that most IDEs don't have (multi-cursor editing, sorting, lower/upper case conversions, etc.).

Vim is used for quick hacking, especially on remote nodes.

Operating System

We use Ubuntu. It is free, fast, secure, and simple. We use Ubuntu Desktop on our development computers and Ubuntu Server on almost all our servers; the exceptions are just a few specific machines like firewalls, NAS, and VM host servers. It is very convenient to use the same OS in development and production environments: it simplifies the development and debugging process a lot.

Ubuntu provides a highly secure environment on both desktop and server installs; usability and maintainability are also very important. Usually an Ubuntu install goes smoothly, but sometimes it causes issues, like that tiny case with a USB drive, the Ubuntu 18.04 installer crashing on an AMD Ryzen 7 2700X CPU, or random soft lockups caused by a kernel bug, also on Ryzen 7.


First, we had a need to put data close to the computation nodes. As described in the global scale post, we considered multiple options for bare-metal servers for the computational engine. We tried ARMv8 ThunderX on bare metal at the Packet cloud. It was cheap enough, but not productive enough, despite a very impressive htop for that time.

Nodes with ThunderX chips

Despite the big number of cores, such hardware turned out to be slow because of the differences between the ARM and x86/AMD64 architectures. We looked elsewhere, at chips used in supercomputers: Intel Xeon Phi. We were able to test a Xeon Phi, and its 256 vcores showed great performance. Unfortunately, it was very expensive back then, and as of today it has already been discontinued.

Node with Xeon Phi chip

We also considered the cloud (primarily AWS), but the cost for our needs was draconian. When computing on AWS, we would also need to store massive amounts of data on EC2 machines and S3, which adds significantly to the cost if we want to download that data from AWS to the local environment for analysis, storage, and backup.

Our approach to computation-network-storage relations is the following. All data required by the computing nodes (servers or Docker images) is available on the nodes themselves, so they could even be taken offline without any impact on the results. Periodically, a statistics job runs to gather data from multiple nodes and visualize it using Python and R. Finally, reduce scripts collect all data from the servers to the NAS and do the consolidation and post-processing. Such an approach requires only reduced, periodic network load, but has some overhead in storage. This is OK: storage is the cheapest of the three (and getting cheaper at the fastest rate, compared to computing and network).
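The consolidation step can be sketched as follows; this is a toy example, assuming each node emits a flat dict of counters (the field names are invented for illustration):

```python
from collections import Counter

def consolidate(node_stats):
    # Merge per-node counter dicts into one cluster-wide summary.
    total = Counter()
    for stats in node_stats:
        total.update(stats)
    return dict(total)

# Example: stats pulled from three nodes by the periodic reduce job.
nodes = [
    {"requests": 120, "errors": 2},
    {"requests": 340, "errors": 0},
    {"requests": 95, "errors": 1},
]
summary = consolidate(nodes)
```

Because every node already holds its own data, this job only touches the network periodically, which is exactly the trade-off described above.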


We needed a reliable network storage server (NAS) as soon as we started to build our own data. The first NAS server we got was an OEM 40TB solution. One storage server was not enough, so we assembled a few more custom 100TB servers. Deciding what OS to run on them was actually more difficult than assembling the hardware. We considered FreeNAS with OpenZFS, but the features it provides were not important enough for us to justify the complication of adding one more environment to support and monitor. The RAM requirements of FreeNAS were also highly unfortunate. So we decided to run Ubuntu there as an NFS storage server.
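On Ubuntu, serving storage over NFS boils down to a couple of config lines. A sketch of an /etc/exports entry (the path, subnet, and options here are illustrative, not our actual config):

```
# /etc/exports: export the data volume read-write to the compute subnet
/srv/data  10.0.0.0/24(rw,sync,no_subtree_check)
```

After editing the file, `sudo exportfs -ra` re-exports the shares without a restart.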

Data Center

After all that, we decided to build our own infrastructure using co-located servers, so the final decision was made to use general-purpose servers. Now we have dedicated servers for development experiments, HPC, storage, and the visualization environment.

For heavy computation (the growing of intelligence), we faced the data gravity phenomenon: all the nodes and services came to where the data is. The data center is dark, cold, and noisy, which is exactly what is needed to cool down hot (75-85°C) chips. Below is another data center, the most beautiful one in the world, MareNostrum, to show you how servers are organized into racks; that environment is sealed.

MareNostrum supercomputer in Barcelona


As usual, our development process contained lots of manual routine steps, such as building and deploying changes to the lab and prod servers. As soon as something becomes routine and boring, it becomes automatable. We improved that by installing a Jenkins server and implementing fully automated build and deployment. The free and open-source Jenkins is very simple to configure and use. Jenkins is written in Java, so we use Java implicitly ;)
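For illustration, a minimal declarative Jenkinsfile of the kind such a setup runs; the stage layout and the deploy script name are hypothetical, not our actual pipeline:

```
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'go build ./...' }
        }
        stage('Test') {
            steps { sh 'go test ./...' }
        }
        stage('Deploy') {
            // deploy.sh is a placeholder for an environment-specific script
            steps { sh './deploy.sh lab' }
        }
    }
}
```

Keeping the pipeline in a file next to the code means the build definition is versioned together with what it builds.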


To run experiments in an isolated environment we used VirtualBox on Ubuntu Server. It was OK, but we discovered some disadvantages, like difficult management: it supported only a command-line interface. Adding the phpVirtualBox plugin (a web interface) made it a little easier to manage virtual machines, but it was very unstable.

Performance within a VM was also questionable. After consulting with a friend at VMware, we figured out the low-level scheduling difference between VirtualBox and ESXi, as hypervisors of different types (ESXi is a bare-metal, type 1 hypervisor; VirtualBox is a hosted, type 2 one). Thus we got a dedicated VMware ESXi server. It has a fancy web UI that allows doing anything you need on that server, anything except one feature: cloning a VM. Cloning a VM can be done in two ways: using a dedicated vSphere server, which we don't have (yet), or via ssh using the command-line interface. Also, we still don't like the longer VM cloning time on ESXi vs. VirtualBox.
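Over ssh, a cold clone on ESXi can be sketched with the standard on-box tools; the datastore and VM names below are placeholders, and a full clone also needs the .vmx file copied and adjusted:

```
# Copy the disk with vmkfstools (performs a proper clone of the VMDK)
vmkfstools -i /vmfs/volumes/datastore1/src-vm/src-vm.vmdk \
              /vmfs/volumes/datastore1/new-vm/new-vm.vmdk

# Register the new VM's .vmx so it shows up in the web UI
vim-cmd solo/registervm /vmfs/volumes/datastore1/new-vm/new-vm.vmx
```

This is workable, but it is exactly the kind of multi-step manual procedure that vSphere (or VirtualBox's one-command clone) hides from you.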


To keep the overall infrastructure under control, it has to be monitored. We use Nagios to monitor the entire infrastructure: it checks every server, switch, and UPS we have. Nagios is a great tool, and it does what it is supposed to do: monitor and send notifications. But it is an old tool, which results in a pretty dumb UI, and it does not provide any metrics by default. Also, all Nagios configuration is done through direct config editing, which makes Nagios less comfortable than other tools like the open-source Zabbix. Maybe we will move to Zabbix one day, as it is easier to configure.
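As an illustration of that direct config editing, a typical Nagios host and service definition looks like this (the host name, address, and check are made up, not our actual config):

```
define host {
    use        linux-server
    host_name  compute-node-01
    address    10.0.0.11
}

define service {
    use                  generic-service
    host_name            compute-node-01
    service_description  Root Partition
    check_command        check_nrpe!check_disk
}
```

Every monitored box needs blocks like these, maintained by hand, which is where the comfort gap with Zabbix comes from.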

While Nagios works, we stay with Nagios.


Effective communication is key. We tried many different tools and stayed with Trello and Slack. Trello is task-oriented: for structured content authoring and content reuse. Slack is activity-oriented: for dynamics and discussions. There are also other tools: secure mail, secure chat, and the plain old phone.


To solve the unsolved problem, we are building a brand new technology. That technology is Artificial Intelligence. We will showcase the solution soon, in November 2018. Stay curious.

Check out how it works at