Tuesday, September 13, 2016

Connecting to wifi without NetworkManager on Ubuntu 16.04 (using wpa_supplicant, dnsmasq, etc)

Suppose NetworkManager in Ubuntu is getting in your way and not letting you configure your wifi how you'd like. You'd like to temporarily turn off NetworkManager and connect to wifi in a lower-level way, without making any permanent changes that will interfere with NetworkManager. Below is the series of commands that worked for me; the approach plays well with the underlying dnsmasq and wpa_supplicant that NetworkManager itself uses.

The first step is turning off NetworkManager. This seems to persist across sleeping/waking my laptop.

sudo service network-manager stop

Next, get the name of your wireless interface. On my laptop it is wlp3s0, but it might be some other 'w' name like wlan0. All the code snippets below use wlp3s0; change it appropriately if your interface name is different. To find the interface name, you could run "ip link" and look for an entry that begins with "w", or more programmatically:

$ iw dev | awk '/Interface/{print $2;}'
wlp3s0

Next, take down the interface, make changes, and bring it back up.

sudo ip link set dev wlp3s0 down
sudo ip link set dev wlp3s0 blah blah blah  # make any changes you want here
sudo ip link set dev wlp3s0 up

Now for some magic. wpa_supplicant is a process that knows how to connect/associate with an access point using WPA security (unlike iwconfig, which apparently only knows about the weaker WEP). It looks like, when you turn off NetworkManager, it tells wpa_supplicant to forget about wireless interfaces. This means that command-line tools like wpa_cli, which talk to wpa_supplicant, won't work. So the way to tell wpa_supplicant about the wireless interface again is:

$ sudo gdbus call --system --dest=fi.w1.wpa_supplicant1 --object-path /fi/w1/wpa_supplicant1 --method fi.w1.wpa_supplicant1.CreateInterface "{'Ifname':<'wlp3s0'>,'Driver':<'nl80211,wext'>}"
(objectpath '/fi/w1/wpa_supplicant1/Interfaces/25',)
Note down the path it returns (/fi/w1/wpa_supplicant1/Interfaces/25 in this example). You'll use it when restoring NetworkManager, below.

Now wpa_cli should start working. It looks like NetworkManager configures wpa_supplicant to listen on a control socket different from the default place wpa_cli looks, so you'll need an extra arg to wpa_cli, "-p /run/wpa_supplicant", as below.
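
If you want to confirm where wpa_supplicant's control sockets actually live on your machine (the path below is what I'd expect on 16.04; adjust if yours differs), a couple of read-only commands help:

# Look for a -O argument pointing at the control socket directory
ps -ef | grep [w]pa_supplicant
# Once the interface is registered, its control socket should appear here
ls -l /run/wpa_supplicant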

wpa_supplicant and thus wpa_cli have the notion of a 'network', which is an access point you'd like wpa_supplicant to try to connect to. Taking down NetworkManager should have removed any networks, but to be sure, try:

sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 list_networks

At this point, you can scan for wifi networks you'd like to connect to (I fuzzed the output below to not reveal actual BSSIDs or SSIDs):

$ sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 scan
OK
$ sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 scan_results
bssid / frequency / signal level / flags / ssid
20:e5:2a:2b:34:d1 2442 -58 [WPA2-PSK-CCMP][WPS][ESS] Josh's Network
22:86:8c:e1:33:70 2462 -78 [ESS] xfinitywifi

Create a 'network' to associate with. This prints an integer, which should be 0 since stopping NetworkManager removed any existing networks. That '0' appears in the wpa_cli commands below; change it if add_network returns something other than 0.

$ sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 add_network
0

Pick an SSID and set the WPA passphrase:

$ sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 set_network 0 ssid "\"Josh's Network\""
OK
$ sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 set_network 0 psk '"a passphrase"'
OK
# Alternatively, to connect without any passphrase, you can say
# sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 set_network 0 key_mgmt NONE

Enabling the network should cause wpa_supplicant to connect:

$ sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 enable_network 0
OK

To check the status of the connection (output below fuzzed a bit so as not to reveal actual address info):

$ sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 status
bssid=20:e5:2a:2b:34:d1
freq=2442
ssid=Josh's Network
id=0
mode=station
pairwise_cipher=CCMP
group_cipher=CCMP
key_mgmt=WPA2-PSK
wpa_state=COMPLETED
address=00:19:d1:d4:d9:22
uuid=61294555-153f-568c-9ed7-36af41fff2e0

Once you're connected (which, as far as I can tell, is when "wpa_state=COMPLETED" shows up in the status output), get an IP:

sudo dhclient -v wlp3s0
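
To double-check that dhclient actually got an address and a default route (these commands only read state, they don't change anything):

ip addr show dev wlp3s0   # look for an 'inet' line with the new address
ip route                  # the default route should go out via wlp3s0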

Next, set up DNS. NetworkManager is set up to use dnsmasq. To communicate with that dnsmasq instance, do something like this (here setting it to use the OpenDNS servers):

$ sudo dbus-send --system --print-reply --dest=org.freedesktop.NetworkManager.dnsmasq /uk/org/thekelleys/dnsmasq uk.org.thekelleys.SetDomainServers array:string:"208.67.222.222","208.67.220.220"
method return time=1473770256.196485 sender=:1.165 -> destination=:1.197 serial=11 reply_serial=2
That's it!
$ ping google.com
PING google.com (209.85.232.139) 56(84) bytes of data.
64 bytes from qt-in-f139.1e100.net (209.85.232.139): icmp_seq=1 ttl=41 time=32.0 ms
64 bytes from qt-in-f139.1e100.net (209.85.232.139): icmp_seq=2 ttl=41 time=32.3 ms
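
If that dbus-send call doesn't work on your machine (it relies on the dnsmasq instance that NetworkManager spawned still being around), a cruder fallback is to write the nameservers into /etc/resolv.conf yourself; NetworkManager will rewrite this file once you start it back up:

echo -e "nameserver 208.67.222.222\nnameserver 208.67.220.220" | sudo tee /etc/resolv.conf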

Then, to tear things down when done and restart NetworkManager:

$ sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 disable_network 0
OK
$ sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 remove_network 0
OK
$ sudo gdbus call --system --dest=fi.w1.wpa_supplicant1 --object-path /fi/w1/wpa_supplicant1 --method fi.w1.wpa_supplicant1.RemoveInterface "'/fi/w1/wpa_supplicant1/Interfaces/25'"
()
$ sudo service network-manager start

Below is a script putting this all together, for your reference. I didn't focus much on robustness, so it's a little fragile:

#!/bin/bash

set -o pipefail # Make the pipeline's exit status be that of the last command to fail (if any)

# https://w1.fi/wpa_supplicant/devel/dbus.html - super handy API reference for talking with wpa_supplicant via gdbus

setup() {
    wif=$(iw dev | awk '/Interface/{print $2;}')
    expectNoNetworks=0
    if (sudo service network-manager status | egrep '^\s+Active: active ' > /dev/null); then
 echo "Stopping NetworkManager"
 sudo service network-manager stop || exit
 sleep 1 # Give /run/wpa_supplicant time to disappear
 if [ -e /run/wpa_supplicant ]; then
     echo "error: /run/wpa_supplicant exists"
     exit 1
 fi
 
 sudo ip link set dev $wif down || exit
        #
 #  Make any changes to the device in here.....
        #
        #
 sudo ip link set dev $wif up || exit
 expectNoNetworks=1
    fi

    echo "Checking wpa_supplicant interface for $wif"
    if ! pre="$(sudo gdbus call --system --dest=fi.w1.wpa_supplicant1 --object-path /fi/w1/wpa_supplicant1 --method fi.w1.wpa_supplicant1.GetInterface "'$wif'" 2> /dev/null )"; then
 echo "  Creating interface"
 pre="$(sudo gdbus call --system --dest=fi.w1.wpa_supplicant1 --object-path /fi/w1/wpa_supplicant1 --method fi.w1.wpa_supplicant1.CreateInterface "{'Ifname':<'$wif'>,'Driver':<'nl80211,wext'>}")" || exit
    fi
    # Converts something like (objectpath '/fi/w1/wpa_supplicant1/Interfaces/25',)   ==>    /fi/w1/wpa_supplicant1/Interfaces/25
    ifPath=$(echo $pre | sed -e "s/^.*'\(.*\)'.*$/\1/")
    
    if [ ! -e /run/wpa_supplicant ]; then
 echo "error: /run/wpa_supplicant does not exist"
 exit 1
    fi

    if ! netInt=$(sudo wpa_cli -p /run/wpa_supplicant -i $wif list_networks | awk '/^[0-9]+\s/ { print $1;}'); then
 echo "List networks failed"
 exit 1
    fi
    if [ "$netInt" != "0" ] || [ $expectNoNetworks -eq 1 ]; then
 if [ "$netInt" ]; then
     echo "Removing any networks"
     sudo gdbus call --system --dest=fi.w1.wpa_supplicant1 --object-path $ifPath --method fi.w1.wpa_supplicant1.Interface.RemoveAllNetworks || exit
 fi
     # We don't expect NetworkManager to have left around any networks, but get rid of them in any case
 netInt="$(sudo wpa_cli -p /run/wpa_supplicant -i $wif add_network)" || exit
    fi
    echo "netInt is $netInt, ifPath is $ifPath"
}
command="start"
if [ "$1" ]; then
    command="$1"
fi
case $command in
    setup)
 setup
 exit 0
 ;;
    list)
 setup
 echo "Doing a AP scan"
 sudo wpa_cli -p /run/wpa_supplicant -i $wif status
 sudo wpa_cli -p /run/wpa_supplicant -i $wif status | grep 'wpa_state'
 sudo wpa_cli -p /run/wpa_supplicant -i $wif scan > /dev/null || exit
 while sudo wpa_cli -p /run/wpa_supplicant -i $wif status | grep 'wpa_state=SCANNING' > /dev/null; do
     sleep 0.1
 done
 # Reformat scan results to be a bit prettier.
 sudo wpa_cli -p /run/wpa_supplicant -i $wif scan_results | grep -v 'bssid / frequency / signal level / flags / ssid' | sort -nr -k 3 | awk 'BEGIN { FS="\t"; printf "Signal\n"; printf "%6s %-30s %-50s %s %s\n", "Level", "SSID", "Flags", "Frequency", "BSSID"; } { printf "%6s %-30s %-50s %-9s %s\n", $3, $5, $4, $2, $1; }'

 read -p "Enter a SSID to connect to: " aSsid
 if [ -z "$aSsid" ]; then
     echo "Can not have empty ssid, i think"
     exit 1
 fi
 #sudo wpa_cli -p /run/wpa_supplicant -i $wif status
 sudo wpa_cli -p /run/wpa_supplicant -i $wif status | grep 'wpa_state'
 if ! sudo wpa_cli -p /run/wpa_supplicant -i $wif status | egrep 'wpa_state=(INACTIVE|DISCONNECTED)' > /dev/null; then
     echo "Disconnecting current connection"
     sudo gdbus call --system --dest=fi.w1.wpa_supplicant1 --object-path $ifPath --method fi.w1.wpa_supplicant1.Interface.Disconnect > /dev/null || exit
 fi
 echo "Disabling network"
 sudo wpa_cli -p /run/wpa_supplicant -i $wif disable_network 0  > /dev/null || exit
 sudo wpa_cli -p /run/wpa_supplicant -i $wif status | grep 'wpa_state'
 echo "Setting ssid"
 sudo wpa_cli -p /run/wpa_supplicant -i $wif set_network 0 ssid "\"$aSsid\""  > /dev/null || exit
 read -sp "Enter WPA password, or leave blank to try connecting with no password (chars will not echo): " aPassword
 if [ -z "$aPassword" ]; then
     echo "using no passowrd"
     sudo wpa_cli -p /run/wpa_supplicant -i $wif set_network 0 key_mgmt NONE > /dev/null || exit
 else
     sudo wpa_cli -p /run/wpa_supplicant -i $wif set_network 0 psk "\"$aPassword\""  > /dev/null || exit
 fi
 echo "Enabling network"
 sudo wpa_cli -p /run/wpa_supplicant -i $wif status | grep 'wpa_state'
 sudo wpa_cli -p /run/wpa_supplicant -i $wif enable_network 0  > /dev/null || exit
 if sudo wpa_cli -p /run/wpa_supplicant -i $wif status | grep 'wpa_state=DISCONNECTED' > /dev/null ; then
     echo "Selecting network $netInt"
     sudo gdbus call --system --dest=fi.w1.wpa_supplicant1 --object-path $ifPath --method fi.w1.wpa_supplicant1.Interface.SelectNetwork "$ifPath/Networks/$netInt" > /dev/null || exit
 fi

 echo "Waiting for wpa_supplicant to get into COMPLETED state"
 while ! sudo wpa_cli -p /run/wpa_supplicant -i $wif status | grep 'wpa_state=COMPLETED' > /dev/null ; do
     sudo wpa_cli -p /run/wpa_supplicant -i $wif status | grep 'wpa_state'
     sleep 0.5
 done
 
 echo "Now getting IP"
 sudo dhclient -v $wif || exit

 echo "Setting openDNS"
 sudo dbus-send --system --print-reply --dest=org.freedesktop.NetworkManager.dnsmasq /uk/org/thekelleys/dnsmasq uk.org.thekelleys.SetDomainServers array:string:"208.67.222.222","208.67.220.220" > /dev/null || exit
 echo "Done!"
 exit 0
 ;;
    done)
 if (sudo service network-manager status | egrep '^\s+Active: active ' > /dev/null); then
     echo "Looks like NetworkManager is active, so I think we're done"
     exit 0
 fi
 setup
 echo "Disabling Network"
 sudo wpa_cli -p /run/wpa_supplicant -i $wif disable_network 0  > /dev/null || exit
 echo "Removing all networks"
 sudo wpa_cli -p /run/wpa_supplicant -i $wif remove_network $netInt || exit
 #sudo gdbus call --system --dest=fi.w1.wpa_supplicant1 --object-path $ifPath --method fi.w1.wpa_supplicant1.Interface.RemoveAllNetworks || exit
 sudo wpa_cli -p /run/wpa_supplicant -i $wif status | grep 'wpa_state'
 echo "Removing interface"
 sudo gdbus call --system --dest=fi.w1.wpa_supplicant1 --object-path /fi/w1/wpa_supplicant1 --method fi.w1.wpa_supplicant1.RemoveInterface "'$ifPath'" || exit
 sleep 1
 echo "Restarting NetworkManager"
 sudo service network-manager start || exit
 echo "Done"
 exit 0
 ;;
    *)
 echo "Command arg is (setup | list | done)"
 exit 1
    ;;
    
esac
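
Assuming you save that as, say, wifi.sh (the name is just for illustration), typical usage looks like:

chmod +x wifi.sh
./wifi.sh list    # scan, pick an SSID and passphrase, connect, get an IP, point DNS at OpenDNS
./wifi.sh done    # tear everything down and hand control back to NetworkManager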

Monday, June 27, 2016

Running a tiny node.js webserver hosting multiple domains with SSL on google compute engine

EDIT: 3/19/2017 - improved server security

I recently migrated a few low-traffic domains from being hosted on namecheap shared hosting to running on a private node.js server on Google Compute Engine. This was partly because I wanted to run a node.js app on one of the domains, and partly because I was tired of the poor quality of namecheap hosting. I wanted to share what ended up working, for anyone else who's trying to do something similar.

I picked Google Compute Engine for hosting partly because an app I'm developing uses Google Cloud Datastore and partly because they had a small machine type that looked like it might work. Their pricing lists a machine type, f1-micro, with 0.6 GB of RAM, which costs $0.0056/hour, or around $4/month.

Let's jump in. The webserver we create will be serving pages for multiple domains. Later we'll get the IP address for the server and point multiple domains at it. Let's assume we have four domains: example1.com, www.example1.com, example2.com, www.example2.com. Let's assume our server is server.js and we run it with node server.js. Note: I've simplified the code in this post a bit from what I actually use, to remove parts that aren't relevant here. There's a chance there are bugs in it. Please let me know if you find anything amiss.

Let's say our directory structure for our app is:

server.js
package.json
static-example1/
static-example2/
where the static directories are where we'll serve static files for our domains.
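
The only npm dependencies server.js needs for what's shown here are express and vhost, so if you're creating package.json from scratch, something like this should fill it in (a sketch; pin versions and add metadata as you see fit):

npm init -y                       # create a bare package.json
npm install --save express vhost  # record the two dependencies server.js uses
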
  1. server.js config. I use the express framework. We'll set it up so that HTTP redirects to HTTPS and also www.example* redirects to example*. SSL/acme/letsencrypt explained below.
    var express = require("express");
    var fs = require('fs');    // needed below to read the SSL cert files
    var http = require('http');
    var https = require('https');
    var app = express();
    
    // letsencrypt verification requests will be HTTP.  Let them proceed without any of the redirection/https checking below.
    app.use('/.well-known/acme-challenge/', express.static('static-acme-challenge/.well-known/acme-challenge'));
    app.use('/.well-known/acme-challenge/',function(req,res, next) {
        res.status(404).send('letsencrypt challenge file missing');
    });
    
    app.use(function redirects(req, res, next) {
        var host = req.headers.host;
        if ((host == 'example1.com' || host == 'example2.com') && req.secure) {
            // good to go
            next();
        } else if (host == 'www.example1.com') {
            // redirect both to HTTPS as well as get rid of the www subdomain.
            res.redirect('https://example1.com' + req.url);
        } else if (host == 'www.example2.com') {
            // redirect both to HTTPS as well as get rid of the www subdomain.
            res.redirect('https://example2.com' + req.url);
        } else if (!req.secure) {
            res.redirect('https://' + req.headers.host + req.url);
        } else {
            // should never get here: an https request for a host we don't serve
            res.status(404).send('unknown host');
        }
    });
    
    var vhost = require('vhost');
    // You could change this up so that instead of serving static files you do more interesting routing.  Beyond scope of this blog..
    app.use(vhost('example1.com', express.static('static-example1')));
    app.use(vhost('example1.com', function(req, res, next) {
        res.status(404).send('no static file found for example1.com');
    }));
    app.use(vhost('example2.com', express.static('static-example2')));
    app.use(vhost('example2.com', function(req, res, next) {
        res.status(404).send('no static file found for example2.com');
    }));
    app.use("*",function(req,res) {
       this should never happen - all requests should have been caught by one of the clauses above.
    });
    
    var httpServer = http.createServer(app);
    var httpsServer = null;
    if (fs.existsSync("./le-config/live/example1.com/privkey.pem")) {
        httpsServer = https.createServer({
            key: fs.readFileSync("./le-config/live/example1.com/privkey.pem"),
            cert: fs.readFileSync("./le-config/live/example1.com/fullchain.pem"),
            ca: fs.readFileSync("./le-config/live/example1.com/chain.pem")
        }, app);
    } else {
        console.log('No SSL certs found.  Assuming we are bootstrapping with no https');
    }
    httpServer.listen(80);
    if (httpsServer) {
        httpsServer.listen(443);
    }
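
    Once the server is running you can sanity-check the redirect logic with curl (-I fetches only headers; -k skips certificate verification, handy before real certs are in place):

    curl -I http://www.example1.com/    # expect a 302 pointing at https://example1.com/
    curl -I http://example1.com/        # expect a 302 to the HTTPS version
    curl -kI https://example1.com/      # expect a response from the static handler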
    
  2. Get code somewhere that Google Compute Engine can find it. We put it in a git repository, with server.js in the root dir of the repository. Then it's easy to use Google Cloud Repositories to get it to the VM. According to this, a cloud repository is created for your project. To set up for pushing to it:
    git config credential.helper gcloud.sh
    git remote add cloud https://source.developers.google.com/p/your-project-id/
    
    Substitute "your-project-id" with your google project id (probably is something like myproj-65432. Then you can push with
    git push google master
  3. Now we need to create a startup script that the VM instance will run when it first starts up. Here's the simplified version of the script I use. I put it in the file startup-script.sh, in the same directory as server.js, though it doesn't need to be.
    #! /bin/bash
    # [START startup]
    set -v
    
    # Talk to the metadata server to get the project id
    PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
    USERNAME=something # Replace with your username you use to log in to the server
    
    # Set up a 512MB swap file.  The 600 MB of RAM the VM has is not quite enough to do the 'npm install' command below.
    fallocate -l 512m /mnt/512MiB.swap
    chmod 600 /mnt/512MiB.swap
    mkswap /mnt/512MiB.swap
    swapon /mnt/512MiB.swap
    
    # [START the rest]
    # Debian has an old version of node.  Get a fresh one.  This also does an apt-get update
    curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
    
    # Some necessary packages
    apt-get install -yq ca-certificates git nodejs build-essential supervisor libcap2-bin authbind unattended-upgrades
    apt-get install -yq certbot -t jessie-backports
    
    # Create a nodeapp user. Node will run as this user.  This account does not have privileges to run a shell.
    useradd -M -s /usr/sbin/nologin nodeapp
    
    # Users seem to be added to default groups that include a group with sudo privileges.  Remove nodeapp from those groups
    for agr in `groups nodeapp | cut -f 2 -d ':'`; do
        if [ $agr != "nodeapp" ]; then
     echo "Removing nodeapp from group $agr"
     gpasswd -d nodeapp $agr
        fi;
    done
    
    # Set default group to nodeapp
    /usr/sbin/usermod -g nodeapp $USERNAME
    # Set default permissions for new files so only nodeapp group can read them
    grep -q '^\s*umask ' /home/$USERNAME/.profile && sed -i 's/^\s*umask .*/umask 0026/' /home/$USERNAME/.profile || echo 'umask 0026' >> /home/$USERNAME/.profile
    
    # /opt/app will hold the git repo containing server.js
    mkdir /opt/app
    chown nodeapp:nodeapp /opt/app
    cd /opt/app
    
    # For authbind - let nodeapp bind to only ports 80 and 443
    touch /etc/authbind/byport/80
    touch /etc/authbind/byport/443
    chown nodeapp /etc/authbind/byport/80
    chown nodeapp /etc/authbind/byport/443
    chmod 755 /etc/authbind/byport/80
    chmod 755 /etc/authbind/byport/443
    
    
    # Create the script that cron will run every 2 weeks to renew our SSL certs
    # Note it restarts the server every 2 weeks as part of this.
    cat >/tmp/renew-cert.sh << 'EOF'
    #! /bin/bash
    cd /opt/app
    echo "`whoami`:`pwd`: Checking renewal on `date`:"
    certbot renew --non-interactive --email subadmin@example1.com --agree-tos --debug --test-cert --config-dir ./le-config --work-dir ./le-work --logs-dir ./le-logs --webroot --webroot-path ./static-acme-challenge
    sudo supervisorctl restart nodeapp
    EOF
    chown $USERNAME:nodeapp /tmp/renew-cert.sh
    
    # Run a bunch more setup not as root
    su - $USERNAME << EOF
    # Get the application source code from the Google Cloud Repository.
    git config --global credential.helper gcloud.sh
    git clone https://source.developers.google.com/p/$PROJECTID /opt/app
    # Work from the app directory for the rest of this block
    cd /opt/app
    
    # Create directory for the letsencrypt challenges to be served by server.js
    mkdir static-acme-challenge
    
    # Install app dependencies specified in package.json
    # --production means to skip dev dependencies
    npm install --production
    
    # setup the cron script
    mv /tmp/renew-cert.sh .
    chmod u+x renew-cert.sh
    (crontab -l && echo "01 02 2,16 * * /opt/app/renew-cert.sh >> /opt/app/le-logs/cron-renews 2>&1") | crontab -
    EOF
    
    # Now that the npm installation has finished, we no longer need swap, so get rid of it.
    swapoff -a
    rm /mnt/512MiB.swap
    
    # Configure supervisor to run the node app.  Limit the amount of RAM node uses via the flags specified.
    cat >/etc/supervisor/conf.d/node-app.conf << EOF
    [program:nodeapp]
    directory=/opt/app
    command=authbind node --trace_gc --max_old_space_size=256 --max_semi_space_size=16 --max_executable_size=256 server.js
    autostart=true
    autorestart=true
    user=nodeapp
    environment=USER="nodeapp",NODE_ENV="production",ONGOOG="yes"
    stdout_logfile=syslog
    stderr_logfile=syslog
    EOF
    # Start the server.js.  Note it does not have SSL certs yet, it is running just so certbot can do the letsencrypt challenges
    supervisorctl reread
    supervisorctl update
    # Give server a chance to start up
    sleep 5
    
    # Make unattended apt upgrades reboot when they happen
    egrep -q "^Unattended-Upgrade::Automatic-Reboot" /etc/apt/apt.conf.d/50unattended-upgrades || echo 'Unattended-Upgrade::Automatic-Reboot "true";' >> /etc/apt/apt.conf.d/50unattended-upgrades
    egrep -q "^Unattended-Upgrade::Automatic-Reboot-Time" /etc/apt/apt.conf.d/50unattended-upgrades || echo 'Unattended-Upgrade::Automatic-Reboot-Time "02:15";' >> /etc/apt/apt.conf.d/50unattended-upgrades
    
    # Generate the SSL certificates for the domains we wish to serve.
    domains=('example1.com' 'www.example1.com'
             'example2.com' 'www.example2.com')
    # Generate a string containing each domain prefixed with a '-d'
    extraargs=""; for i in "${domains[@]}"; do extraargs="$extraargs -d $i"; done
    #debugflags="--debug --test-cert"
    debugflags=""
    # Generate the SSL certs.  Run as $USERNAME, from /opt/app so the relative paths resolve
    su - $USERNAME -c "cd /opt/app && certbot certonly --non-interactive --email subadmin@example1.com --agree-tos --config-dir ./le-config --work-dir ./le-work --logs-dir ./le-logs --webroot --webroot-path ./static-acme-challenge $debugflags $extraargs"
    
    # Restart server to pick up the SSL certs
    supervisorctl restart nodeapp
    
    # Application should now be running under supervisor
    # [END startup]
    

    A few notes on the script. First, the main trick in getting node to run on the f1-micro instance is managing memory. We do this by temporarily adding a swap file to absorb the extra memory the npm install uses, and by passing flags to node (they are actually v8 flags) to limit the resident memory usage of the node binary. Node/v8 does not have great tools for managing memory usage. In particular, I saw no way to limit the virtual memory size. I tried ulimit and node would not start, failing when allocating memory.
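
    If you want to check whether those limits are actually holding once things are running, a couple of stock commands on the instance are enough:

    free -m                          # overall memory and swap usage on the f1-micro
    ps -C node -o pid,rss,vsz,args   # resident and virtual size of the node process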

    Regarding SSL: I went with letsencrypt since it's free. I wanted really badly to be able to use the letsencrypt-express npm package. It promised so much convenience. I banged my head against it for a few days and could not get it to work. I dug into the source code, tried fixing bugs, and eventually gave up. The source didn't seem very maintainable and did not inspire confidence. I switched to certbot, which letsencrypt supports natively, using its "other" Debian 8 (jessie) mode. Worked like a charm.

    letsencrypt requires that you be able to prove you control the domains you want certs for. The way I do this is by letting certbot put files on the server that the letsencrypt CA can then find. I put the challenges in the static-acme-challenge directory.
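
    A quick way to sanity-check that challenge plumbing before certbot relies on it (test-file is just an illustrative name):

    # On the instance, drop a dummy file where certbot would write its challenges
    mkdir -p /opt/app/static-acme-challenge/.well-known/acme-challenge
    echo hello > /opt/app/static-acme-challenge/.well-known/acme-challenge/test-file

    # From anywhere, this should come back with "hello" over plain HTTP
    curl http://example1.com/.well-known/acme-challenge/test-file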

  4. Now let's set up the server. First, set up the gcloud command-line tool. This will involve creating a project and other steps detailed in the Getting Started Guide they refer to there. In this post, let's assume your project is called myproj. Terminology: Google calls a VM running on a machine an "instance". We'll be creating an instance that will run our node server.
  5. Set the default region and zone. I picked a zone by running gcloud compute machine-types list and choosing a zone listed for the machine type f1-micro (us-east1-b when I did it). Not sure if that's necessary.
  6. Reserve a static external IP address. Without this, the instance will not necessarily show up at the same IP each time it's (re)started. Note you are charged ($0.01/hr) if the IP address is not used by a running instance, so don't leave it lying around if you don't need it.
    gcloud compute addresses create myproj-static-ip

    You may now set any A records to point to that address. This is necessary partly so that letsencrypt can verify you own the domains when generating the cert inside startup-script.sh. That means downtime on your websites. You can avoid that by using another mechanism to prove to letsencrypt you control the domains (e.g., adding TXT records to DNS, or acting on the letsencrypt challenges using your old server), and modifying startup-script.sh, but that's outside the scope of this tutorial.
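
    To see which address you were actually given (so you know what to point the A records at), and later to confirm they resolve:

    gcloud compute addresses list    # shows the reserved static IP
    dig +short example1.com          # should return that IP once DNS has propagated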

  7. Create the VM instance. There are lots of options when running the create command. I create a VM based on Debian because it's the default and thus, I figured, the least likely to have issues. I include "datastore" below because I need to access the datastore, but you can omit it. Here's the command:
    gcloud compute instances create myproj-instance --machine-type=f1-micro --image-family=debian-8 --image-project=debian-cloud --scopes userinfo-email,datastore,cloud-platform --zone us-east1-b --metadata-from-file startup-script=startup-script.sh --tags https-server --address myproj-static-ip
  8. Now you can do things like
    # Get the console output from the running VM
    gcloud compute instances get-serial-port-output myproj-instance
    
    # ssh into the VM.  You can omit the 'nodeapp@' bit.
    gcloud compute ssh nodeapp@myproj-instance
    
  9. Tell Google to expose your new server to the internet by opening up the ports in the firewall:
    gcloud compute firewall-rules create http-and-https --allow tcp:80,tcp:443 --description "Allow http and https access to http-server" --target-tags https-server
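
    To confirm the rule exists and that the ports actually answer from outside:

    gcloud compute firewall-rules list
    curl -I http://example1.com/    # should come back with a redirect to HTTPS from server.js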
    
  10. If you want to rip the thing down, you do:
    gcloud compute firewall-rules delete http-and-https
    gcloud compute instances delete myproj-instance
    gcloud compute addresses delete myproj-static-ip
    

Enjoy, and I hope it's helpful.