Friday, September 29, 2017

The optimal shape of a kitchen measuring cup

Four days ago I launched Euclid, a more accurate measuring cup, on Kickstarter, and it just crossed 100% of its fundraising goal, which is amazing. The future has changed from *if* this idea ever becomes a real thing to “It’s going to be real!”

Euclid’s shape is computationally generated, based on math resulting from a neat geometry insight. I quit my software engineering job four years ago to create it, and though it emerged from custom software, the cup itself is not digital in any way. No bluetooth, app or touchscreen. The Kickstarter page has a summary of the idea. Here I wanted to add a bit more detail about the project and how it came about.

My background is as an infrastructure software engineer. I was at Google and Facebook for 10 years building a number of distributed systems including GFS, Sibyl and Configerator.

I also love math and tinkering with things. Growing up, I spent a lot of (mostly voluntary) time in the basement, screwing around doing dumb stuff. Whatever anyone says, it is a terrible idea to pull the spark plug wire off a running lawn mower with your bare hands.

One day I was baking in the kitchen and had a 2-cup measuring cup out and the recipe called for 1/4-cup. I felt like I should switch to a smaller measuring cup and for whatever reason stopped to wonder why. Why do smaller measuring cups seem better at measuring small amounts than larger measuring cups?

I realized it was about accuracy and that clearly defining measurement error would help in reasoning this through. Brace yourself, I’m going to get a bit technical, because understanding the problem was the key to solving it.

The problem

Let’s consider measurement error as the vertical distance between the target measurement line and the true height of the liquid. For example, when measuring 1-cup, you may think you hit the target line, but actually you overshot by 1mm because the measuring cup was not quite at eye level. Other possible reasons for missing the line include liquid sloshing, hand shaking, and the Coriolis force (ok, I’m kidding about that last one… I think).

The graphic below shows an example of overshooting by 1mm:



The amount of extra liquid in the cup equals 1mm times the surface area of the liquid. Divide that by the target volume, and you have your fractional error (e.g., 5%).

If we assume that over-shooting (or under-shooting) happens just as easily at the top of a measuring cup as at the bottom, then that means the extra liquid height (1mm in the graphic) will be roughly the same at the top and the bottom.

Looking at the equation in the graphic, this means that measurement error depends primarily on the ratio of surface area to volume at the target line. Let’s call that the S/V ratio.
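To put rough numbers on this (a back-of-the-envelope sketch with assumed dimensions, not measurements of any particular cup), here is what a 1mm miss costs in a straight-sided cup about 9cm across:

import math

# Back-of-the-envelope: fractional error ~= overshoot height x surface area / target volume.
# The cup diameter and overshoot are assumptions for illustration only.
CUP_ML = 236.6        # milliliters in 1 US cup
OVERSHOOT_MM = 1.0    # how far the chef misses each line

def fractional_error(target_cups, cup_diameter_cm=9.0):
    surface_cm2 = math.pi * (cup_diameter_cm / 2) ** 2
    extra_ml = (OVERSHOOT_MM / 10.0) * surface_cm2   # a 1mm-thick slab of liquid (mm -> cm)
    return extra_ml / (target_cups * CUP_ML)

for cups in (2, 1, 0.5, 0.25):
    print("%4s cup(s): %.1f%% error" % (cups, 100 * fractional_error(cups)))

The same 1mm miss that costs a little over 1% at 2 cups costs roughly 10% at 1/4-cup.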

Here are a few interesting implications that may correspond to your intuition:

  • Narrow measuring cups are more accurate than wide measuring cups.
  • Small-capacity measuring cups are better at measuring small volumes because they are narrower. This answers the conundrum that started this whole quest.
  • Cylindrical measuring cups measure smaller amounts less accurately than larger amounts. Why? Because the S/V ratio is larger at smaller amounts. This is why it’s hard to measure ¼-cup accurately in a 2-cup measuring cup. It’s also why larger measuring cups don’t have lines for ⅛-cup or 1-tablespoon or 1-teaspoon.
  • Many measuring cups have sloped sides or are conical. All of these shapes have the same problem. The S/V ratio increases for smaller amounts, though not as quickly as in a cylindrical cup.
This animation shows the relationship between shape and error:


A natural question is, if narrow means more accurate, then why not use test tubes for measuring everything? Well, a test tube that holds 2 cups would be 10 feet high. I’m not sure about your cupboards, but 10 feet won’t fit in mine, even if I rearrange things. But even if you bit the bullet and remodeled the kitchen to include a 10-foot-high cupboard, the test tube would still be hard to use. A large fraction of whatever you’re measuring might end up stuck to the sides of the tube because there’s so much tube. So practical measuring cup design trades off accuracy for convenience.
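As a sanity check on that figure (my own arithmetic, assuming a typical test tube roughly 14mm across):

import math

# 2 US cups of liquid in a ~14mm-diameter tube (assumed size) stacks up to about 10 feet.
volume_ml = 2 * 236.6
area_cm2 = math.pi * (1.4 / 2) ** 2
height_cm = volume_ml / area_cm2
print("height: %.0f cm (%.1f feet)" % (height_cm, height_cm / 30.48))  # ~307 cm, ~10 feet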

Designing for the S/V ratio

We just observed that the ratio of surface area to volume (S/V ratio) is important to measuring accuracy, and that today’s measuring cups are less accurate at measuring smaller amounts because the S/V ratio is larger for smaller amounts.

That suggests that a solution would be a measuring cup shaped so that the S/V ratio is the same at every line marking. That would make it just as accurate measuring small amounts as large amounts. More precisely, it would be optimal in the sense that it minimizes the variance of error across all measurement amounts, for a chef who can get liquid to within k millimeters of each measurement line, for some constant k that varies with each chef's ability / effort.

Figuring out what kind of shape would have this property involves math. I started with some simplifying assumptions, like assuming the measuring cup is circular. It still was tricky and took me a number of months working on weekends. The math is not as complicated as it might sound. If I were honest, I’d say I didn’t know what I was doing initially and it took me a while to think about the problem the right way. But I’m not sure I’m ready to admit that :).
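For intuition, here is one toy example of a shape with this property. It is my own sketch, not necessarily the construction behind Euclid: a circular cup whose liquid surface area grows exponentially with height above the lowest mark, with a base region below that mark holding A0/k of liquid. The snippet checks numerically that S/V stays pinned at k all the way up:

import math

# Toy model with assumed numbers: surface area at height h above the lowest
# marking is A(h) = A0 * exp(k*h).  If the region below the lowest marking
# holds A0/k of liquid, then V(h) = A(h)/k, so S/V equals k at every height.
k = 0.05          # target S/V ratio, per cm (assumption)
A0 = 10.0         # surface area at the lowest marking, in cm^2 (assumption)
vol = A0 / k      # volume the base must hold below the lowest marking

steps, h_max = 10000, 12.0
dh = h_max / steps
prev_area = A0
for i in range(1, steps + 1):
    area = A0 * math.exp(k * i * dh)
    vol += 0.5 * (prev_area + area) * dh    # trapezoidal slab of volume
    prev_area = area
    if i % 2500 == 0:
        print("h=%5.2f cm   S/V=%.6f   (target %.6f)" % (i * dh, area / vol, k))

In this circular toy model the radius grows like exp(k*h/2), i.e., the profile flares outward toward the top.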

Below is an animation of how the surface area and volume change. The Kickstarter page has a more detailed description of how the solution works. Below, I’ll talk about the process a bit.


The first design

There are many potential designs that have a constant S/V ratio. As I was developing the equations, I started playing with visualizations to understand the design space, first 2D via javascript and SVG, and then 3D using Blender, an open-source modelling program. Blender was cool partly because it has a python API, which made it possible to programmatically generate curves and surfaces that have the right geometry. The first rendering I have from this time is:



Bye-bye job

Around this point I left Facebook to work on this full-time. I knew I wanted to try starting some kind of company, I loved the measuring cup idea, and I also loved that it was far afield from what software engineers normally start companies around. Part of me liked the quizzical, surprised and somewhat disturbed expression people got on their faces when I expressed my intentions.

I thought of the measuring cup somewhat as a passion project and break from SWE, so I jumped into it with no business plan or customer research. Around this time I also incorporated an S-corp, filed patent applications and so forth.

Designing how to design

I started having industrial designer contractors help design the measuring cup and it quickly became apparent that there was a huge gap. Neither they nor their design software could couple design with the surface area & volume (S/V) geometry constraints in the way we needed.

There were some expensive process failures here. They would design shapes that I didn’t know how to modify to obey the S/V constraints. The scheme I finally came up with was to separate design into three steps. First, the industrial designers created a cross-sectional closed curve of the measuring cup, such as:


Second, I ran custom Python code that replicated the curve as a sequence of ribs, positioning and scaling each rib according to the S/V constraints. For example:

----->

Third, while 3D design software is terrible for designing with mathematical constraints, it is excellent for calculating the surface area and volume of things. So once I had generated a shape, I could write code to double-check that the surface area and volume at every height were indeed what they were supposed to be.
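Here is a minimal sketch of what that check can look like (placeholder rib data and a hypothetical target ratio, not Euclid's real geometry): compute each rib's cross-sectional area with the shoelace formula, accumulate volume between ribs, and assert the S/V ratio at every height:

def polygon_area(points):
    # Shoelace formula for a closed 2D polygon given as [(x, y), ...].
    area = 0.0
    for i in range(len(points)):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % len(points)]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def check_ribs(ribs, target_ratio, tol=1e-3):
    # ribs: list of (height, cross_section_points) pairs, lowest rib first.
    # Assumes the base below the first rib holds area/target_ratio of volume,
    # which is what the constant-ratio property requires at the first marking.
    volume = prev_h = prev_area = None
    for h, points in ribs:
        area = polygon_area(points)
        if volume is None:
            volume = area / target_ratio
        else:
            volume += 0.5 * (prev_area + area) * (h - prev_h)  # trapezoidal slab
        assert abs(area / volume - target_ratio) < tol, (h, area / volume)
        prev_h, prev_area = h, area
    return True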

Prototyping

I went through 25+ physical prototypes and countless 3D models. The bread-and-butter prototypes were 3D-printed in white plastic, sometimes on someone’s MakerBot found via 3D Hubs, a 3D printing marketplace. A much smaller number of high-quality prototypes, including the one in the video, were milled from a block of PMMA. I wasn’t able to figure out a way to make transparent, food-safe prototypes economically.

The original shapes were circular, such as

But it became apparent that industrial printing techniques could not print accurately on surfaces with compound curvature (i.e., non-zero Gaussian curvature, meaning the surface curves in two directions). So we switched to the flat-sided design you see on Kickstarter. This design had a bonus: the markings are much easier to read than on both our previous prototypes and most existing measuring cups.

Manufacturing

Talking to manufacturers was both thrilling and challenging. It was thrilling because they run heavy machinery that takes as input raw materials and usually some software and produces useful physical objects. As a software engineer, my best efforts just push around electrons.

Talking to manufacturers was challenging because the domain knowledge is very different from that of software engineering. For example, there are engineers who specialize in something called Design For Manufacturing (DFM). You might think you come up with a design, prototype, do some testing and then hand it off to a manufacturer. Oh no no, my friend. If you don’t have in mind the technical constraints of manufacturing when you design, you probably are in for a surprise and some redesign work.

Manufacturers of course want business, but there is also a cost to them for talking with you. That includes the time of their in-house engineer to analyze your design, the risk that you might cause problems for them down the line because you’re inexperienced, and of course the opportunity cost of not working with someone who might place a larger order than yours.

I generally found manufacturers very polite and willing both to quote my designs and to give useful feedback if there were issues. Through many iterations, discussions with manufacturers, and advice from DFM experts, the design was refined to the point where they no longer see any issues.

I’ll save a detailed discussion of manufacturing for another post, but below are two of the bigger limitations to injection molding.  Limitations? To injection molding? Yup.

First, constant wall thickness is important. Injection molding does not tolerate large variations in the thickness of the part well. Partly this is because plastic shrinks as it cools and thicker sections shrink differently than thinner sections. Thick blobs of plastic can also have problems with air bubbles, and they increase cost since the part needs to cool in the mold longer, which is time the machine can’t spend making the next part.

Second, the way plastic flows when it's being injected matters. The way it works is this. The mold is a block of steel with a cavity into which hot, liquid plastic is squirted under high pressure. The plastic enters through a hole in the mold (called a “port”) and flows to fill the entire mold. As the plastic flows, it cools because the steel is relatively cold. The steel is cool partly to help the part solidify quickly so the machine can move on to make the next part. As the plastic cools, it becomes more viscous and starts to harden. One reason the plastic enters under high pressure is so it fills the entire mold before it hardens.

The consequence of this is that sections of the mold that are farther from the injection port receive plastic that is cooler and more viscous, so the shapes that plastic can fill are more limited. Also, plastic farther from the injection port shrinks differently than plastic closer to it.

Luckily there are great software flow analysis tools out there that simulate how the plastic fills a mold and can predict what sort of issues might arise.

The rest

I skipped over a lot of detail about manufacturing and haven't even talked about printing and industrial inks. Nor have I talked about selecting a specific plastic and the range of material properties plastics offer. Stay tuned or ask questions...

Friday, February 17, 2017

Converting video to high-quality gif on linux (via ffmpeg)

A brief post today. Recently, I've been taking videos of my screen and converting them to animated gifs.
The command I use to record the video (for example, to record 15 seconds of video at 25 fps, sized 830x150 pixels, at an offset of x=230,y=360 from the top-left corner) is:
avconv -f x11grab -y -r 25 -s 830x150 -i :0.0+230,360 -vcodec libx264 -crf 0 -threads 4 -t 15 myvideo.mp4

For a while I was using a one-liner, based on this post, to convert the video to animated gifs using ffmpeg and ImageMagick's convert command. The version below samples the mp4 at 15 fps.

ffmpeg -i myvideo.mp4 -r 15 -f image2pipe -vcodec ppm - | convert -delay 7 -loop 0 - gif:- | convert -layers Optimize - myvideo.gif

The problem I ran into was that the color palette chosen for the gifs was poor and created weird artifacts.

So I came across http://blog.pkh.me/p/21-high-quality-gif-with-ffmpeg.html, which laid out another strategy, resulting in the following two commands:

ffmpeg -v warning -i myvideo.mp4 -vf "fps=25,palettegen" -y /tmp/palette.png
ffmpeg -v warning -i myvideo.mp4 -i /tmp/palette.png -lavfi "fps=25 [x]; [x][1:v] paletteuse" -y myvideo.gif

or, if you don't need to change the frame rate:

ffmpeg -v warning -i myvideo.mp4 -vf "palettegen" -y /tmp/palette.png
ffmpeg -v warning -i myvideo.mp4 -i /tmp/palette.png -lavfi paletteuse -y myvideo.gif

I found this approach much better. The color artifacts were gone and also the file sizes were much smaller! It chopped about 50-80% off of the file size compared to the previous ffmpeg/convert pipeline.
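If you end up running this pair of commands a lot, here is a small wrapper one could sketch around them (the script name, defaults, and temp path are my own choices, not part of the original commands):

#!/usr/bin/env python3
# vid2gif.py -- two-pass palette conversion.  Usage: vid2gif.py in.mp4 out.gif [fps]
import os, subprocess, sys, tempfile

def vid2gif(src, dst, fps=None):
    palette = os.path.join(tempfile.mkdtemp(), "palette.png")
    vf = ("fps=%s,palettegen" % fps) if fps else "palettegen"
    subprocess.check_call(["ffmpeg", "-v", "warning", "-i", src, "-vf", vf, "-y", palette])
    lavfi = ("fps=%s [x]; [x][1:v] paletteuse" % fps) if fps else "paletteuse"
    subprocess.check_call(["ffmpeg", "-v", "warning", "-i", src, "-i", palette, "-lavfi", lavfi, "-y", dst])

if __name__ == "__main__":
    vid2gif(sys.argv[1], sys.argv[2], sys.argv[3] if len(sys.argv) > 3 else None)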

Tuesday, September 13, 2016

Connecting to wifi without NetworkManager on Ubuntu 16.04 (using wpa_supplicant, dnsmasq, etc)

Suppose NetworkManager in Ubuntu is getting in your way and not letting you configure your wifi how you'd like. You'd like to temporarily turn off NetworkManager and connect to wifi in a lower-level way, without making any permanent changes that will interfere with NetworkManager. Below is the series of commands that worked for me. The approach below plays well with the underlying dnsmasq and wpa_supplicant that NetworkManager uses.

The first step is turning off NetworkManager. This seems to persist across sleeping/waking my laptop.

sudo service NetworkManager stop

Next, get the name of your wireless interface. On my laptop it is wlp3s0, but it might be some other 'w' name like wlan0. All the code snippets below use wlp3s0; change it appropriately if your interface name is different. To find the interface name, you could do "ip link" and look for an entry that begins with "w", or more programmatically:

$ iw dev | awk '/Interface/{print $2;}'
wlp3s0

Next, take down the interface, make changes, and bring it back up.

sudo ip link set dev wlp3s0 down
sudo ip link set dev wlp3s0 blah blah blah  # make any changes you want here
sudo ip link set dev wlp3s0 up

Now for some magic. wpa_supplicant is a process that knows how to connect/associate with an access point using WPA security (unlike iwconfig, which apparently only knows about the weaker WEP). It looks like, when you turn off NetworkManager, it tells wpa_supplicant to forget about wireless interfaces. This means that command-line tools like wpa_cli, which talk with wpa_supplicant, won't work. So the way to tell wpa_supplicant about the wireless interface again is:

$ sudo gdbus call --system --dest=fi.w1.wpa_supplicant1 --object-path /fi/w1/wpa_supplicant1 --method fi.w1.wpa_supplicant1.CreateInterface "{'Ifname':<'wlp3s0'>,'Driver':<'nl80211,wext'>}"
(objectpath '/fi/w1/wpa_supplicant1/Interfaces/25',)
Note down the path it returns (/fi/w1/wpa_supplicant1/Interfaces/25 in this example). You'll use it when restoring NetworkManager, below.

Now wpa_cli should start working. It looks like NetworkManager configures wpa_supplicant to listen on a control socket different from the default place wpa_cli looks, so you'll need an extra arg to wpa_cli, "-p /run/wpa_supplicant", as below.

wpa_supplicant and thus wpa_cli have the notion of a 'network', which is an access point you'd like wpa_supplicant to try to connect to. Taking down NetworkManager should have removed any networks, but to be sure, try:

sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 list_networks

At this point, you can scan for wifi networks you'd like to connect to (I fuzzed the output below to not reveal actual BSSIDs or SSIDs):

$ sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 scan
OK
$ sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 scan_results
bssid / frequency / signal level / flags / ssid
20:e5:2a:2b:34:d1 2442 -58 [WPA2-PSK-CCMP][WPS][ESS] Josh's Network
22:86:8c:e1:33:70 2462 -78 [ESS] xfinitywifi

Create a 'network' to associate with. This prints an integer, which I think should be 0, since NetworkManager removed any networks when we stopped it. That '0' appears in the wpa_cli commands below; change it if add_network returns something other than 0.

$ sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 add_network
0

Pick an SSID and set the WPA passphrase:

$ sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 set_network 0 ssid "\"Josh's Network\""
OK
$ sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 set_network 0 psk '"a passphrase"'
OK
# Alternatively, to connect without any passphrase, you can say
# sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 set_network 0 key_mgmt NONE

Enabling the network should cause wpa_supplicant to connect

$ sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 enable_network 0
OK

To check status of connection (output below a bit fuzzed to not reveal actual address info):

$ sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 status
bssid=20:e5:2a:2b:34:d1
freq=2442
ssid=Josh's Network
id=0
mode=station
pairwise_cipher=CCMP
group_cipher=CCMP
key_mgmt=WPA2-PSK
wpa_state=COMPLETED
address=00:19:d1:d4:d9:22
uuid=61294555-153f-568c-9ed7-36af41fff2e0

Once you're connected, which I think is when "wpa_state=COMPLETED" shows up in the status output, get an IP address:

sudo dhclient -v wlp3s0

Next set up DNS. NetworkManager is set up to use dnsmasq. To communicate with dnsmasq, do something like this (to set it to use opendns servers):

$ sudo dbus-send --system --print-reply --dest=org.freedesktop.NetworkManager.dnsmasq /uk/org/thekelleys/dnsmasq uk.org.thekelleys.SetDomainServers array:string:"208.67.222.222","208.67.220.220"
method return time=1473770256.196485 sender=:1.165 -> destination=:1.197 serial=11 reply_serial=2
That's it!
$ ping google.com
PING google.com (209.85.232.139) 56(84) bytes of data.
64 bytes from qt-in-f139.1e100.net (209.85.232.139): icmp_seq=1 ttl=41 time=32.0 ms
64 bytes from qt-in-f139.1e100.net (209.85.232.139): icmp_seq=2 ttl=41 time=32.3 ms

To tear things down when done and restart NetworkManager:

$ sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 disable_network 0
OK
$ sudo wpa_cli -p /run/wpa_supplicant -i wlp3s0 remove_network 0
OK
sudo gdbus call --system --dest=fi.w1.wpa_supplicant1 --object-path /fi/w1/wpa_supplicant1 --method fi.w1.wpa_supplicant1.RemoveInterface "'/fi/w1/wpa_supplicant1/Interfaces/25'"
()
$ sudo service network-manager start

Below is a script putting this all together, for your reference. I didn't focus much on robustness, so it's a little fragile:

#!/bin/bash

set -o pipefail # Make it so return code from pipe is last one to fail

# https://w1.fi/wpa_supplicant/devel/dbus.html - super handy API reference for talking with wpa_supplicant via gdbus

setup() {
    wif=$(iw dev | awk '/Interface/{print $2;}')
    expectNoNetworks=0
    if (sudo service network-manager status | egrep '^\s+Active: active ' > /dev/null); then
 echo "Stopping NetworkManager"
 sudo service network-manager stop || exit
 sleep 1 # Give /run/wpa_supplicant time to disappear
 if [ -e /run/wpa_supplicant ]; then
     echo "error: /run/wpa_supplicant exists"
     exit 1
 fi
 
 sudo ip link set dev $wif down || exit
        #
 #  Make any changes to the device in here.....
        #
        #
 sudo ip link set dev $wif up || exit
 expectNoNetworks=1
    fi

    echo "Checking wpa_supplicant interface for $wif"
    if ! pre="$(sudo gdbus call --system --dest=fi.w1.wpa_supplicant1 --object-path /fi/w1/wpa_supplicant1 --method fi.w1.wpa_supplicant1.GetInterface "'$wif'" 2> /dev/null )"; then
 echo "  Creating interface"
 pre="$(sudo gdbus call --system --dest=fi.w1.wpa_supplicant1 --object-path /fi/w1/wpa_supplicant1 --method fi.w1.wpa_supplicant1.CreateInterface "{'Ifname':<'$wif'>,'Driver':<'nl80211,wext'>}")" || exit
    fi
    # Converts something like (objectpath '/fi/w1/wpa_supplicant1/Interfaces/25',)   ==>    /fi/w1/wpa_supplicant1/Interfaces/25
    ifPath=$(echo $pre | sed -e "s/^.*'\(.*\)'.*$/\1/")
    
    if [ ! -e /run/wpa_supplicant ]; then
 echo "error: /run/wpa_supplicant does not exist"
 exit 1
    fi

    if ! netInt=$(sudo wpa_cli -p /run/wpa_supplicant -i $wif list_networks | awk '/^[0-9]+\s/ { print $1;}'); then
 echo "List networks failed"
 exit 1
    fi
    if [ "$netInt" != "0" ] || [ $expectNoNetworks -eq 1 ]; then
 if [ "$netInt" ]; then
     echo "Removing any networks"
     sudo gdbus call --system --dest=fi.w1.wpa_supplicant1 --object-path $ifPath --method fi.w1.wpa_supplicant1.Interface.RemoveAllNetworks || exit
 fi
     # We don't expect NetworkManager to have left around any networks, but get rid of them in any case
 netInt="$(sudo wpa_cli -p /run/wpa_supplicant -i $wif add_network)" || exit
    fi
    echo "netInt is $netInt, ifPath is $ifPath"
}
command="start"
if [ "$1" ]; then
    command="$1"
fi
case $command in
    setup)
 setup
 exit 0
 ;;
    list)
 setup
 echo "Doing a AP scan"
 sudo wpa_cli -p /run/wpa_supplicant -i $wif status
 sudo wpa_cli -p /run/wpa_supplicant -i $wif status | grep 'wpa_state'
 sudo wpa_cli -p /run/wpa_supplicant -i $wif scan > /dev/null || exit
 while sudo wpa_cli -p /run/wpa_supplicant -i $wif status | grep 'wpa_state=SCANNING' > /dev/null; do
     sleep 0.1
 done
 # Reformat scan results to be a bit prettier.
 sudo wpa_cli -p /run/wpa_supplicant -i $wif scan_results | grep -v 'bssid / frequency / signal level / flags / ssid' | sort -nr -k 3 | awk 'BEGIN { FS="\t"; printf "Signal\n"; printf "%6s %-30s %-50s %s %s\n", "Level", "SSID", "Flags", "Frequency", "BSSID"; } { printf "%6s %-30s %-50s %-9s %s\n", $3, $5, $4, $2, $1; }'

 read -p "Enter a SSID to connect to: " aSsid
 if [ -z "$aSsid" ]; then
     echo "Can not have empty ssid, i think"
     exit 1
 fi
 #sudo wpa_cli -p /run/wpa_supplicant -i $wif status
 sudo wpa_cli -p /run/wpa_supplicant -i $wif status | grep 'wpa_state'
 if ! sudo wpa_cli -p /run/wpa_supplicant -i $wif status | egrep 'wpa_state=(INACTIVE|DISCONNECTED)' > /dev/null; then
     echo "Disconnecting current connection"
     sudo gdbus call --system --dest=fi.w1.wpa_supplicant1 --object-path $ifPath --method fi.w1.wpa_supplicant1.Interface.Disconnect > /dev/null || exit
 fi
 echo "Disabling network"
 sudo wpa_cli -p /run/wpa_supplicant -i $wif disable_network 0  > /dev/null || exit
 sudo wpa_cli -p /run/wpa_supplicant -i $wif status | grep 'wpa_state'
 echo "Setting ssid"
 sudo wpa_cli -p /run/wpa_supplicant -i $wif set_network 0 ssid "\"$aSsid\""  > /dev/null || exit
 read -sp "Enter WPA password, or leave blank to try connecting with no password (chars will not echo): " aPassword
 if [ -z "$aPassword" ]; then
     echo "using no passowrd"
     sudo wpa_cli -p /run/wpa_supplicant -i $wif set_network 0 key_mgmt NONE > /dev/null || exit
 else
     sudo wpa_cli -p /run/wpa_supplicant -i $wif set_network 0 psk "\"$aPassword\""  > /dev/null || exit
 fi
 echo "Enabling network"
 sudo wpa_cli -p /run/wpa_supplicant -i $wif status | grep 'wpa_state'
 sudo wpa_cli -p /run/wpa_supplicant -i $wif enable_network 0  > /dev/null || exit
 if sudo wpa_cli -p /run/wpa_supplicant -i $wif status | grep 'wpa_state=DISCONNECTED' > /dev/null ; then
     echo "Selecting network $netInt"
     sudo gdbus call --system --dest=fi.w1.wpa_supplicant1 --object-path $ifPath --method fi.w1.wpa_supplicant1.Interface.SelectNetwork "$ifPath/Networks/$netInt" > /dev/null || exit
 fi

 echo "Waiting for wpa_supplicant to get into COMPLETED state"
 while ! sudo wpa_cli -p /run/wpa_supplicant -i $wif status | grep 'wpa_state=COMPLETED' > /dev/null ; do
     sudo wpa_cli -p /run/wpa_supplicant -i $wif status | grep 'wpa_state'
     sleep 0.5
 done
 
 echo "Now getting IP"
 sudo dhclient -v $wif || exit

 echo "Setting openDNS"
 sudo dbus-send --system --print-reply --dest=org.freedesktop.NetworkManager.dnsmasq /uk/org/thekelleys/dnsmasq uk.org.thekelleys.SetDomainServers array:string:"208.67.222.222","208.67.220.220" > /dev/null || exit
 echo "Done!"
 exit 0
 ;;
    done)
 if (sudo service network-manager status | egrep '^\s+Active: active ' > /dev/null); then
     echo "Looks like NetworkManager is active, so I think we're done"
     exit 0
 fi
 setup
 echo "Disabling Network"
 sudo wpa_cli -p /run/wpa_supplicant -i $wif disable_network 0  > /dev/null || exit
 echo "Removing all networks"
 sudo wpa_cli -p /run/wpa_supplicant -i $wif remove_network $netInt || exit
 #sudo gdbus call --system --dest=fi.w1.wpa_supplicant1 --object-path $ifPath --method fi.w1.wpa_supplicant1.Interface.RemoveAllNetworks || exit
 sudo wpa_cli -p /run/wpa_supplicant -i $wif status | grep 'wpa_state'
 echo "Removing interface"
 sudo gdbus call --system --dest=fi.w1.wpa_supplicant1 --object-path /fi/w1/wpa_supplicant1 --method fi.w1.wpa_supplicant1.RemoveInterface "'$ifPath'" || exit
 sleep 1
 echo "Restarting NetworkManager"
 sudo service network-manager start || exit
 echo "Done"
 exit 0
 ;;
    *)
 echo "Command arg is (setup | list | done)"
 exit 1
    ;;
    
esac

Monday, June 27, 2016

Running a tiny node.js webserver hosting multiple domains with SSL on google compute engine

EDIT: 3/19/2017 - improved server security

I recently migrated a few low-traffic domains from being hosted on namecheap shared hosting to running on a private node.js server on google compute engine. This was partly because I wanted to run a node.js app on one of the domains, and partly because I was tired of the poor quality of namecheap hosting. I wanted to share what ended up working with anyone else who's trying to do something similar.

I picked Google Compute Engine for hosting partly because an app I'm developing uses google cloud datastore and partly because they had a small machine type that looked like it might work. Their pricing lists a machine type, f1-micro, with 0.6 GB of memory, which costs $.0056/hour, or around $4/month.

Let's jump in. The webserver we create will serve pages for multiple domains. Later we'll get the IP address for the server and point multiple domains at it. Let's assume we have four domains: example1.com, www.example1.com, example2.com, www.example2.com. Let's assume our server is server.js and we run it with node server.js. Note: I've simplified the code in this post a bit from what I actually use, to remove parts that aren't relevant here. There's a chance there are bugs in it. Please let me know if you find anything amiss.

Let's say our directory structure for our app is:

server.js
package.json
static-example1/
static-example2/
where the static directories are where we'll serve static files for our domains.
  1. server.js config. I use the express framework. We'll set it up so that HTTP redirects to HTTPS and also www.example* redirects to example*. SSL/acme/letsencrypt explained below.
    var express = require("express");
    var fs = require('fs');
    var http = require('http');
    var https = require('https');
    var app = express();
    
    // letsencrypt verification requests will be HTTP.  Let them proceed without any of the redirection/https checking below.
    app.use('/.well-known/acme-challenge/', express.static('static-acme-challenge/.well-known/acme-challenge'));
    app.use('/.well-known/acme-challenge/',function(req,res, next) {
        res.status(404).send('letsencrypt challenge file missing');
    });
    
    app.use(function redirects(req, res, next) {
        var host = req.headers.host;
        if ((host == 'example1.com' || host == 'example2.com') && req.secure) {
            // good to go
            next();
        } else if (host == 'www.example1.com') {
            // redirect both to HTTPS as well as get rid of the www subdomain.
            res.redirect('https://example1.com' + req.url);
        } else if (host == 'www.example2.com') {
            // redirect both to HTTPS as well as get rid of the www subdomain.
            res.redirect('https://example2.com' + req.url);
        } else if (!req.secure) {
            res.redirect('https://' + req.headers.host + req.url);
        } else {
            // Should never get here: unknown host over HTTPS.  Reject the request.
            res.status(404).send('unknown host');
        }
    });
    
    var vhost = require('vhost');
    // You could change this up so that instead of serving static files you do more interesting routing.  Beyond scope of this blog..
    app.use(vhost('example1.com', express.static('static-example1')));
    app.use(vhost('example1.com', function(req, res, next) {
        res.status(404).send('no static file found for example1.com');
    }));
    app.use(vhost('example2.com', express.static('static-example2')));
    app.use(vhost('example2.com', function(req, res, next) {
        res.status(404).send('no static file found for example2.com');
    }));
    app.use("*",function(req,res) {
       this should never happen - all requests should have been caught by one of the clauses above.
    });
    
    var httpServer = http.createServer(app);
    var httpsServer = null;
    if (fs.existsSync("./le-config/live/example1.com/privkey.pem")) {
        httpsServer = https.createServer({
            key: fs.readFileSync("./le-config/live/example1.com/privkey.pem"),
            cert: fs.readFileSync("./le-config/live/example1.com/fullchain.pem"),
            ca: fs.readFileSync("./le-config/live/example1.com/chain.pem")
        }, app);
    } else {
      console.log('No SSL certs found.  Assuming we are bootstrapping with no https');
    }
    httpServer.listen(80);
    if (httpsServer) {
        httpsServer.listen(443);
    }
    
  2. Get code somewhere that google compute engine can find it. We put it in a git repository, with server.js in the root dir of the repository. Then it's easy to use Google cloud repositories to get it to the VM. According to this, a cloud repository is created for your project. To set up for pushing to it:
    git config credential.helper gcloud.sh
    git remote add cloud https://source.developers.google.com/p/your-project-id/
    
    Substitute "your-project-id" with your google project id (probably is something like myproj-65432. Then you can push with
    git push google master
  3. Now we need to create a startup script that the VM instance will run when it first starts up. Here's the simplified version of the script I use. I put it in the file startup-script.sh, in the same directory as server.js, though it doesn't need to be.
    #! /bin/bash
    # [START startup]
    set -v
    
    # Talk to the metadata server to get the project id
    PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
    USERNAME=something # Replace with the username you use to log in to the server
    
    # Set up a 512MB swap partition.  The 600 MB of ram the VM has is not quite enough to do the 'npm install' command below.
    fallocate -l 512m /mnt/512MiB.swap
    chmod 600 /mnt/512MiB.swap
    mkswap /mnt/512MiB.swap
    swapon /mnt/512MiB.swap
    
    # [START the rest]
    # Debian has an old version of node.  Get a fresh one.  This does an apt-get update
    curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
    
    # Some necessary packages
    apt-get install -yq ca-certificates git nodejs build-essential supervisor libcap2-bin authbind unattended-upgrades
    apt-get install -yq certbot -t jessie-backports
    
    # Create a nodeapp user. Node will run as this user.  This account does not have privileges to run a shell.
    useradd -M -s /usr/sbin/nologin nodeapp
    
    # Users seem to be added to default groups that include a group with sudo privileges.  Remove nodeapp from those groups
    for agr in `groups nodeapp | cut -f 2 -d ':'`; do
        if [ $agr != "nodeapp" ]; then
     echo "Removing nodeapp from group $agr"
     gpasswd -d nodeapp $agr
        fi;
    done
    
    # Set default group to nodeapp
    /usr/sbin/usermod -g nodeapp $USERNAME
    # Set default permissions for new files so only nodeapp group can read them
    grep -q '^\s*umask ' /home/$USERNAME/.profile && sed -i 's/^\s*umask .*/umask 0026/' /home/$USERNAME/.profile || echo 'umask 0026' >> /home/$USERNAME/.profile
    
    # /opt/app will hold the git repo containing server.js
    mkdir /opt/app
    chown nodeapp:nodeapp /opt/app
    cd /opt/app
    
    # For authbind - let nodeapp bind to only ports 80 and 443
    touch /etc/authbind/byport/80
    touch /etc/authbind/byport/443
    chown nodeapp /etc/authbind/byport/80
    chown nodeapp /etc/authbind/byport/443
    chmod 755 /etc/authbind/byport/80
    chmod 755 /etc/authbind/byport/443
    
    
    # Create the script that cron will run every 2 weeks to renew our SSL certs
    # Note it restarts the server every 2 weeks as part of this.
    cat >/tmp/renew-cert.sh << 'EOF'
    #! /bin/bash
    cd /opt/app
    echo "`whoami`:`pwd`: Checking renewal on `date`:"
    certbot renew --non-interactive --email subadmin@example1.com --agree-tos --debug --test-cert --config-dir ./le-config --work-dir ./le-work --logs-dir ./le-logs --webroot --webroot-path ./static-acme-challenge
    sudo supervisorctl restart nodeapp
    EOF
    chown $USERNAME:nodeapp /tmp/renew-cert.sh
    
    # Run a bunch more setup not as root
    su - $USERNAME << EOF
    # Get the application source code from the Google Cloud Repository.
    git config --global credential.helper gcloud.sh
    git clone https://source.developers.google.com/p/$PROJECTID /opt/app
    cd /opt/app
    
    # Create directory for the letsencrypt challenges to be served by server.js
    mkdir static-acme-challenge
    
    # Install app dependencies specified in package.json
    # --production means to skip dev dependencies
    npm install --production
    
    # setup the cron script
    mv /tmp/renew-cert.sh .
    chmod u+x renew-cert.sh
    (crontab -l && echo "01 02 2,16 * * /opt/app/renew-cert.sh >> /opt/app/le-logs/cron-renews 2>&1") | crontab -
    EOF
    
    # Now that the npm installation has finished, we no longer need swap, so get rid of it.
    swapoff -a
    rm /mnt/512MiB.swap
    
    # Configure supervisor to run the node app.  Limit the amount of RAM node uses via the flags specified.
    cat >/etc/supervisor/conf.d/node-app.conf << EOF
    [program:nodeapp]
    directory=/opt/app
    command=authbind node --trace_gc --max_old_space_size=256 --max_semi_space_size=16 --max_executable_size=256 server.js
    autostart=true
    autorestart=true
    user=nodeapp
    environment=USER="nodeapp",NODE_ENV="production",ONGOOG="yes"
    stdout_logfile=syslog
    stderr_logfile=syslog
    EOF
    # Start the server.js.  Note it does not have SSL certs yet, it is running just so certbot can do the letsencrypt challenges
    supervisorctl reread
    supervisorctl update
    # Give server a chance to start up
    sleep 5
    
    # Make unattended apt upgrades reboot when they happen
    egrep -q "^Unattended-Upgrade::Automatic-Reboot" /etc/apt/apt.conf.d/50unattended-upgrades || echo 'Unattended-Upgrade::Automatic-Reboot "true";' >> /etc/apt/apt.conf.d/50unattended-upgrades
    egrep -q "^Unattended-Upgrade::Automatic-Reboot-Time" /etc/apt/apt.conf.d/50unattended-upgrades || echo 'Unattended-Upgrade::Automatic-Reboot-Time "02:15";' >> /etc/apt/apt.conf.d/50unattended-upgrades
    
    # Generate the SSL certificates for the domains we wish to serve.
    domains=('example1.com' 'www.example1.com'
             'example2.com' 'www.example2.com')
    # Generate a string containing each domain prefixed with '-d'
    extraargs=""; for i in "${domains[@]}"; do extraargs="$extraargs -d $i"; done
    #debugflags="--debug --test-cert"
    debugflags=""
    # Generate the SSL certs.  Run as $USERNAME from /opt/app
    su - $USERNAME -c "cd /opt/app && certbot certonly --non-interactive --email subadmin@example1.com --agree-tos --config-dir ./le-config --work-dir ./le-work --logs-dir ./le-logs --webroot --webroot-path ./static-acme-challenge $debugflags $extraargs"
    
    # Restart server to pick up the SSL certs
    supervisorctl restart nodeapp
    
    # Application should now be running under supervisor
    # [END startup]
    

    A few notes on the script. First, the main trick in getting node to run on the f1-micro instance is managing memory. We do this by temporarily adding a swap partition to absorb the extra memory the npm install uses, and by passing flags to node (they actually are v8 flags) to limit the resident memory usage of the node binary. Node/v8 does not have great tools for managing memory usage. In particular, I saw no way to limit the virtual memory size. I tried ulimit and node would not start, failing when allocating memory.

    Regarding SSL: I went with letsencrypt since it's free. I wanted really badly to be able to use the letsencrypt-express npm package. It promised so much convenience. I banged my head against it for a few days and could not get it to work. I dug into the source code, tried fixing bugs, and eventually gave up. The source didn't seem very maintainable and did not inspire confidence. I switched to certbot, the client letsencrypt supports natively, specifically its "other" Debian 8 install mode. It worked like a charm.

    letsencrypt requires that you be able to prove you control the domains you want certs for. The way I do this is by letting certbot put files on the server that the letsencrypt CA can then fetch. I put the challenges in the static-acme-challenge directory.

  4. Now let's set up the server. First, setup the gcloud command-line tool. This will involve creating a project and other steps detailed in the Getting Started Guide they refer to there. In this blog, let's assume your project is called myproj. Terminology: Google calls a VM running on a machine an "instance". We'll be creating an instance that will run our node server.
  5. Set the default region and zone. I picked a zone by running gcloud compute machine-types list and choosing the region listed for the machine type f1-micro (us-east1-b when I did it). Not sure if that's necessary.
  6. Reserve a static external IP address. Without this, the instance will not necessarily show up at the same IP each time it's (re)started. Note you are charged ($0.01/hr) if the IP address is not used by a running instance, so don't leave it lying around if you don't need it.
    gcloud compute addresses create myproj-static-ip

    You may now set your domains' A records to point at that address. This is necessary partly so that letsencrypt can verify you control the domains when generating the cert inside startup-script.sh. That means some downtime on your websites. You can avoid that by using another mechanism to prove to letsencrypt that you control the domains (e.g., adding TXT records to DNS, or acting on the letsencrypt challenges using your old server), and modifying startup-script.sh, but that's outside the scope of this tutorial.

  7. Create the VM instance. There are lots of options when running the create command. I create a VM based on debian because it's the default and thus, I figured, the least likely to have issues. I include "datastore" below because I need to access the datastore, but you can omit it. This command creates the instance:
    gcloud compute instances create myproj-instance --machine-type=f1-micro --image-family=debian-8 --image-project=debian-cloud --scopes userinfo-email,datastore,cloud-platform --zone us-east1-b --metadata-from-file startup-script=startup-script.sh --tags https-server --address myproj-static-ip
  8. Now you can do things like
    # Get the console output from the running VM
    gcloud compute instances get-serial-port-output myproj-instance
    
    # ssh into the VM.  You can omit the 'nodeapp@' bit.
    gcloud compute ssh nodeapp@myproj-instance
    
  9. Tell google to expose your new server to the internet by opening up the ports in the firewall:
    gcloud compute firewall-rules create http-and-https --allow tcp:80,tcp:443 --description "Allow http and https access to http-server" --target-tags https-server
    
  10. If you want to rip the thing down, you do:
    gcloud compute firewall-rules delete http-and-https
    gcloud compute instances delete myproj-instance
    gcloud compute addresses delete myproj-static-ip
    

Enjoy and hope it's helpful.

Thursday, October 15, 2015

Backing up your master password and secret sharing

I created a tool implementing a cryptographic secret sharing algorithm, which I use for backing up my master password. The tool is at: equatinelabs.com/secretshare.html.

Here's the motivation and how it works:

Let's say you use a password manager to remember your passwords, and the password manager requires a master password. If you forget the master password, you lose access to the other passwords. So how do you back up your master password?

One possibility is giving your master password to trusted people such as family members. A problem with that approach is that one of the people you share it with might accidentally reveal it, post it on the internet, leave the piece of paper on which it is written somewhere a malicious person can see it, etc.

Another possibility is to write your password down, tear it in half and give each half to a trusted person. Then any single person doesn't know the entire password, so if they reveal their half, your password still is not entirely revealed. However, if someone malicious sees one half of the password, it becomes much easier to guess the other half. Another problem with this approach is that if either of the two people loses the slip of paper, then you can no longer reconstruct your master password.

Enter Shamir's scheme for secret sharing, based on an old but neat paper. It shows how to divide a secret into a number of "shares" such that one share provides no information about the secret, and any two shares are sufficient to reconstruct the secret. It actually generalizes to any minimum number of shares, but we'll use the version that requires a minimum of two shares to reconstruct the secret.

The idea is that a line is defined by two points. Knowing one point does not help determine the slope or intercept of the line. Knowing two points is enough to determine both. So in Shamir's scheme each point is a share and the line slope or intercept is the secret. The equation for a line in 2D is: y = ax + b.

The tool I created uses this as follows. The user types in a secret. The tool generates a long (128-bit) random number for each of a and b (i.e., the slope and intercept of a line). The slope (parameter a) will be the encryption key used to encrypt the secret. Each share contains the coordinates of one distinct point along the line. Thus knowing one share doesn't tell you anything about the encryption key, and with two shares, it is straightforward to figure out the slope and then decrypt to recover the secret.
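Stripped of the AES step, the base-64 encoding, and the checksums (all described below), the share arithmetic looks roughly like this. This is a bare-bones sketch for illustration, not the tool's actual code:

import secrets

P = 2**128 - 159   # the prime used for the finite field (see below)

def make_shares(n):
    # Pick a random line y = a*x + b; the slope a is the encryption key.
    a = secrets.randbelow(P)
    b = secrets.randbelow(P)
    return a, [(x, (a * x + b) % P) for x in range(1, n + 1)]

def recover_key(share1, share2):
    # Any two distinct points on the line determine its slope.
    (x1, y1), (x2, y2) = share1, share2
    return ((y2 - y1) * pow(x2 - x1, -1, P)) % P

key, shares = make_shares(3)
assert recover_key(shares[0], shares[2]) == key   # any two shares recover the key

A single share reveals nothing about the slope, because for any guessed slope there is an intercept consistent with that one point.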

The tool encrypts the secret with AES-128 using the 128-bit random key. Shares are represented in base-64. I took all the easy-to-generate characters on the keyboard, threw away a few confusing ones (such as o, O, l, and .), and that left 64. I could shrink the alphabet down to something like hex, but then the length of the shares would become longer.

The math to manipulate the shares and the secret key is done using a finite field based on the prime 2^128-159. Another note, I also threw in a few bits/bytes of checksum, both into the encrypted secret as well as the shares, to detect mistypes.

Feel free to browse the javascript source code of the tool, it is not minified. It uses the crypto routines in the Forge (client-side javascript library). It does not send any information over the network, except for Google analytics, though feel free to verify that for yourself. I'm happy to provide more details.

Lastly, anything crypto-related is notoriously difficult to get right, including this tool. Please use with care, and it'd be great to have another pair of eyes on the javascript source code if anyone cares to browse!

Saturday, February 21, 2015

Solidworks Macros via Python

I finally figured out how to write Solidworks macros in python (yay!). Almost all of the Solidworks API works, with one exception described at the end of this post. The Solidworks API is via the windows COM interface (ugh).

Here's the initial setup:

  1. Download and install Python. I used Active Python 2.7.8
  2. Get Solidworks. Python macros have worked pretty seamlessly across 2012, 2014 and 2015.
  3. Get familiar with the Solidworks online API help. E.g., http://help.solidworks.com/2014/english/api/sldworksapiprogguide/Welcome.htm is for the 2014 API. Note you can change the year in the URL to access the docs for other versions of Solidworks.
Ok, with that, we can dig right in. Before I run a macro, I make sure Solidworks is running and the document I want to modify is active (e.g., visible on the screen). I think you can use macros to start solidworks and open documents, but I'm less familiar with those commands.

Basic startup

This code snippet connects to a running instance of Solidworks of the year specified. For example, to connect to Solidworks 2015, set swYearLastDigit = 5:
import win32com.client
import pythoncom
swYearLastDigit = 5
sw = win32com.client.Dispatch("SldWorks.Application.%d" % (20+(swYearLastDigit-2)))  # e.g. 20 is SW2012,  23 is SW2015
You can also invoke Dispatch without the year specification, as in ....Dispatch("SldWorks.Application"). If there's only one version on your machine, this connects to that version.

At this point, the python code looks similar to the VBA code in the API docs. Sometimes you have to play with whether a function wants args or not. Here's the next piece of the boilerplate I have at the beginning of my scripts:

model = sw.ActiveDoc
modelExt = model.Extension
selMgr = model.SelectionManager
featureMgr = model.FeatureManager
sketchMgr = model.SketchManager
eqMgr = model.GetEquationMgr
As an example of a difference in arguments, consider the Equation method on the IEquationMgr object (eqMgr in my code above). The 2014 API docs for the Equation member say that you read an equation with Equation(idx), and set one by putting an equal sign after the expression. In python the binding is a bit different:
print("Equation 1 is: " + eqMgr.Equation(1))
eqMgr.Equation(1, "\"myVar\" = 42")
print("Equation 1 is now: " + eqMgr.Equation(1))
The most common difference I see between the Visual Basic docs and python is whether or not to put parentheses after the member name. I just try both and see which works.

By the way, I see little rhyme or reason to the return values of method invocations, both as documented in the API and as returned in practice. I usually go with the API docs and assert return values, then delete the assertion if/when the method doesn't follow the API docs.

Creating arguments of the correct type (aka, getting SelectById2 to work)

Sometimes the method requires some fancy arguments, like reference arguments, or you otherwise just can't figure out what the thing is expecting. The Visual Basic interface is better at automatically converting types into the appropriate COM objects. The python bindings for the API don't work quite as well all the time. So here's what to do when you need to dig deeper and understand how to invoke a method:
  1. Generate the static python COM bindings for solidworks
    1. First, run python c:\Python27\Lib\site-packages\win32com\client\makepy.py
    2. Select "SldWorks 2015 Type Library" and hit OK. You'll see output like this:
      Generating to C:\Users\myhappyuser\AppData\Local\Temp\gen_py\2.7\83A33D31-27C5-11CE-BFD4-00400513BB57x0x23x0.py
      Building definitions from type library...
      Generating...
      Importing module
      
      The exact file name may change depending on your version of Solidworks.
  2. Open up that generated file in a viewer, like Komodo or Notepad
  3. Open up the web page VARIANT Type Constants in a browser
The generated python file has info on what arguments each method is expecting and that web page helps decode the arguments into something a bit more actionable.

Let's work a few common examples.

First let's try the macro command to select an object by name. The recommended version of the method is modelExt.SelectByID2. If you try putting in some actual args, you'll see:

c:\Users\happyuser\Documents\MeasuringCup>python
ActivePython 2.7.8.10 (ActiveState Software Inc.) based on
Python 2.7.8 (default, Jul  2 2014, 19:48:49) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import win32com.client
>>> sw = win32com.client.Dispatch("SldWorks.Application")
>>> model = sw.ActiveDoc
>>> modelExt = model.Extension
>>> modelExt.SelectByID2("mysketch", "SKETCH", 0, 0, 0, False, 0, None, 0)
Traceback (most recent call last):
  File "", line 1, in 
  File ">", line 2, in SelectByID2
pywintypes.com_error: (-2147352571, 'Type mismatch.', None, 8)
The last number in the 'Type mismatch' line indicates the argument that is causing the problem. In this case it is the None, which is the eighth argument. The API man page says it is expecting a "pointer to a callout". We just want to pass None, but the None isn't getting converted to the right COM object, so we have to do the conversion manually.

To do this, open the generated python file (for Solidworks 2015 it should be named 83A33D31-27C5-11CE-BFD4-00400513BB57x0x23x0.py) and search for 'SelectByID2'. You should find a hit that looks like:

	def SelectByID2(self, Name=defaultNamedNotOptArg, Type=defaultNamedNotOptArg, X=defaultNamedNotOptArg, Y=defaultNamedNotOptArg
			, Z=defaultNamedNotOptArg, Append=defaultNamedNotOptArg, Mark=defaultNamedNotOptArg, Callout=defaultNamedNotOptArg, SelectOption=defaultNamedNotOptArg):
		'Select a specified entity'
		return self._oleobj_.InvokeTypes(68, LCID, 1, (11, 0), ((8, 1), (8, 1), (5, 1), (5, 1), (5, 1), (11, 1), (3, 1), (9, 1), (3, 1)),Name
			, Type, X, Y, Z, Append
			, Mark, Callout, SelectOption)
The list of tuples ( ((8,1), (8, 1), (5, 1), ...) above ) contains info on the expected type of each argument. The eighth tuple corresponds to the problematic eighth argument. That tuple is (9, 1). Now look up '9' in that MSDN web page titled "VARIANT Type Constants" and you'll see it matches with VT_DISPATCH. Here's the magic on how to generate the correct object manually:
    arg1 = win32com.client.VARIANT(pythoncom.VT_DISPATCH, None)
The first arg to VARIANT is the type of the object to create, and the second arg is the initial contents. So now we can use that and:
>>> arg1 = win32com.client.VARIANT(pythoncom.VT_DISPATCH, None)
>>> modelExt.SelectByID2("Sketch1", "SKETCH", 0, 0, 0, False, 0, arg1, 0)
True
Hooray!!!

Now let's try a different example. Continuing on, let's get a sketch we selected:

>>> selMgr = model.SelectionManager
>>> aSketch = selMgr.GetSelectedObject(1).GetSpecificFeature2
>>> aSketch.Name
u'Sketch1'
and let's get the plane it came from:
>>> aSketch.GetReferenceEntity(0)
Traceback (most recent call last):
  File "", line 1, in 
  File ">", line 2, in GetReferenceEntity
pywintypes.com_error: (-2147352571, 'Type mismatch.', None, 1)
Oops, that didn't work. Well, looking into the generated python file and searching for GetReferenceEntity, we find:
	def GetReferenceEntity(self, LEntityType=defaultNamedNotOptArg):
		'Get entity that this sketch is created on'
		return self._ApplyTypes_(52, 1, (9, 0), ((16387, 3),), u'GetReferenceEntity', None,LEntityType
			)
And then looking at the VARIANT web page, we find that 16387 = pythoncom.VT_BYREF | pythoncom.VT_I4. So GetReferenceEntity uses an output argument to return the entity type. We can construct an output argument similar to what we did for SelectByID2:
    arg1 = win32com.client.VARIANT(pythoncom.VT_BYREF | pythoncom.VT_I4, -1)
    refPlane = aSketch.GetReferenceEntity(arg1)
and now we can see:
>>> arg1 = win32com.client.VARIANT(pythoncom.VT_BYREF | pythoncom.VT_I4, -1)
>>> arg1.value
-1
>>> refPlane = aSketch.GetReferenceEntity(arg1)
>>> arg1.value
4
To decode the '4', look at the doc for GetReferenceEntity, which points to the doc for an enumeration type, swSelectType_e, which says that 4 maps to swSelDATUMPLANES.

Constants

If you don't want to put '4' and other random constants in your python code, there are two possibilities. The first is to generate the python Solidworks COM constants bindings:
  1. Run python c:\Python27\Lib\site-packages\win32com\client\makepy.py
  2. Select "SOLIDWORKS 2015 Constant type library"
  3. Add an EnsureModule command early in your python program using the number shown in the output of the makepy command. For example, with Solidworks 2015, that is:
    swconst = win32com.client.gencache.EnsureModule('{4687F359-55D0-4CD3-B6CF-2EB42C11F989}', 0, 23, 0).constants # sw2015
    
Now, by looking at a man page you can find the appropriate constant name in that module, though there isn't much structure there. For example, you can check the value returned (via the output argument) by GetReferenceEntity, above, by doing:
    assert arg1.value == swconst.swSelDATUMPLANES
This isn't quite as nice as the Visual Basic interface, which structures the constants.

The downside of the EnsureModule approach is that it requires anyone using your beautiful python morsel to run makepy. A more crude approach is to copy the constants you need from the man pages. For example, I have:

class swconst:
    swSelDATUMPLANES = 4
    .... more constants here ...

A prayer for GetMathUtility

And here is the one bit of the API that I can't get to work. For the life of me, I can't seem to get GetMathUtility to work, and so cannot figure out how to create a MathPoint. What happens is:
>>> mathUtil = sw.GetMathUtility
>>> mathUtil.CreatePoint
Traceback (most recent call last):
  File "", line 1, in 
  File "C:\Python27\lib\site-packages\win32com\client\dynamic.py", line 511, in __getattr__
    ret = self._oleobj_.Invoke(retEntry.dispid,0,invoke_type,1)
pywintypes.com_error: (-2147417851, 'The server threw an exception.', None, None)
The output I expect is an error saying an argument is missing. I cannot find any argument that placates this method, nor any other method on the mathUtil object it returns. Kudos to anyone who can figure this out! It works fine in Visual Basic, so my guess is that something is either screwed up in the python binding, or there's some kind of bug in the interface that the Visual Basic binding manages to avoid.

Tuesday, October 22, 2013

Sturdy shipping tube for art

Since my last post, I've moved to Cambridge, MA. The movers damaged two "ink on parchment" pieces (thangkas) I had, and the only repair person I could find is in New York City (btw, it's Alan Farancz Painting Conservation Studio and they do great work). I couldn't find a decent crate to ship the pieces in, and I also couldn't find a larger-diameter shipping tube (so the art wouldn't be rolled too tightly), so I made my own:

Here's how I made it:
  1. Start with a two-foot section of six-inch PVC pipe.
  2. Get a 6" PVC pipe cap and use a hacksaw to reduce its depth a bit, just to save a little weight and make the tube less lopsided-feeling. Glue the cap onto the pipe with PVC cement.
  3. Cut three cardboard circles to form a cap at the other end. I used wood glue to stick them to each other, offset by ~60 degrees, so that they support each other.
On the inside, I added foam and used some stiff paper to fill the middle and provide structure so that the thangkas didn't collapse on themselves. I rolled everything up in a few layers of thin bubble wrap. Total cost was << $50; the tube itself was $16. Hopefully this is helpful to anyone else trying to figure out how to make a sturdy shipping tube.