## Monday, November 23, 2020

### How to calculate input impedance in LTSpice

This post is a follow-on to my previous post on output impedance; this time it's for calculating input impedance. Suppose we have a black box circuit that we will be sending a signal to. The black box could be the Rx input pin of an RF transceiver IC, for example.

We want to calculate Zin, the input impedance of the black box. In the case of input impedance, we'll add a 1V AC signal at the frequency of interest (915 MHz in this example), and measure the complex current that flows.

Step 1 is to add the AC source and set up the simulation parameters:

To make the example concrete, we'll pretend the black box consists of an inductor and resistor.

Step 2 is to run the simulation. It should output something like:

Now we convert I(V1) from mag/phase into a complex number using this formula:

Plugging in magnitude of 0.0189064 and phase of 160.969 we get -0.01787 + 0.006165j.

One subtlety: because we're measuring the current through the voltage source, LTSpice reports the current flowing into the source's positive terminal, which is the negative of the current flowing out into the black box. So we negate it (equivalently, shift the phase by 180 degrees), giving 0.01787 - 0.006165j.

Now we know the voltage and current, so using the complex form of V=IR, we can solve for the impedance:

As we can see, the 50 matches the resistance, and since the only reactive component is the inductor, we can convert the 17.25j using this equation:

Which matches as well. Voila!
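To double-check the arithmetic, the whole calculation can be reproduced in a few lines of Python. The 1 V amplitude is the AC source from the setup above; the 3 nH at the end is simply what these numbers work out to, since the schematic isn't reproduced here:

```python
import cmath
import math

f = 915e6   # frequency of interest, Hz
V = 1.0     # the 1 V AC test source

# I(V1) from the simulation output, as magnitude and phase (degrees)
mag, phase_deg = 0.0189064, 160.969
I = cmath.rect(mag, math.radians(phase_deg))   # -0.01787 + 0.006165j

# LTSpice reports the current into the source's positive terminal, which is
# the negative of the current flowing into the black box, so negate it.
I = -I                                         #  0.01787 - 0.006165j

Z_in = V / I          # complex form of Ohm's law
print(Z_in)           # approximately 50 + 17.25j

# The imaginary part is the inductor's reactance X = 2*pi*f*L
L = Z_in.imag / (2 * math.pi * f)
print(L)              # approximately 3e-9, i.e. a 3 nH inductor
```

(`cmath.polar` does the reverse conversion if you ever need to go back to magnitude/phase.)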

## Saturday, November 21, 2020

### How to calculate output impedance in LTSpice

No idea why this isn't readily available on the web, but here it is. Caveat: I'm not an RF engineer, so I might be missing something basic.

Suppose you have a circuit in LTSpice such as the following:

So an AC signal is generated and passes through some black box, and you want to know the output impedance at Vout. That is, you want to know the impedance of the black box. For example, the black box could be part of an RF transmit circuit, and you want to know its impedance so you know what matching network to build to transform it into 50 ohms.

Step 1: Add an AC simulation directive at the frequency of interest, in this case 915 MHz.
Step 2: Add a resistor to ground at Vout. The resistor value can be anything, but to avoid precision errors it's probably best to put it roughly in the same ballpark as the anticipated real part of the impedance. Shown below:

Step 3: Simulate. To make the example concrete, we'll use a simple circuit of a resistor and inductor in place of the black box, though in practice the circuit could of course be much more complex:

And the simulation results will look like:

Now comes the fun part. First, we'll convert the current through our test resistor I(R1) from magnitude/phase into a complex quantity using the equation:

In our example, plugging in 0.00514023 and -62.4433 produces I = 0.00237 - j 0.004557.
The current is the same throughout this series circuit, so we can use Z = E / I (the complex form of R = V / I) to find the total impedance of the circuit:

And then we subtract off our fake load of impedance 40 + 0j to get:

And in this particular case, since we know Zout consists of a resistor plus inductor, we can verify that the real part of Zout matches the resistor R2 (both 50 ohms) and we can use the imaginary part to verify the inductor value:

Voila!
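The arithmetic can be reproduced in a few lines of Python. This is a sketch assuming the AC source amplitude is 1 V (the source settings aren't reproduced here); the 30 nH figure at the end is just what these numbers work out to:

```python
import cmath
import math

f = 915e6      # frequency, Hz
V = 1.0        # assumed 1 V AC source amplitude
R1 = 40.0      # the test resistor to ground at Vout

# I(R1) from the simulation output, as magnitude and phase (degrees)
mag, phase_deg = 0.00514023, -62.4433
I = cmath.rect(mag, math.radians(phase_deg))
print(I)                      # approximately 0.00237 - 0.004557j

Z_total = V / I               # total impedance seen by the source
Z_out = Z_total - R1          # subtract off the fake 40 + 0j load
print(Z_out)                  # real part approximately 50 ohms, matching R2

# Imaginary part -> inductance at this frequency
L = Z_out.imag / (2 * math.pi * f)
print(L)                      # approximately 30e-9, i.e. 30 nH
```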

## Tuesday, March 19, 2019

I used reminders extensively in Inbox. Inbox is going away (end of March 2019). Google Calendar has a notion of a reminder, but you can't get email notifications for those, so there's no way to keep track of which reminders you haven't done yet. However, calendar events do have email notifications you can enable. That seems close enough. Below is the Python code I used to transfer my reminders, both current and future, over to calendar events or email.

The code below has some extra info on the undocumented Google API for searching and deleting reminders as well. Note you need to get an API key before using this code (see below).

```#!/usr/bin/python3 -t
import re
import argparse
import json
import os
import readline  # to enable navigating through entered text
import time
import copy
from typing import Tuple
#from email import encoders
import base64
from email.mime.text import MIMEText

import httplib2
from oauth2client import tools
from oauth2client.client import OAuth2WebServerFlow
from oauth2client.file import Storage
import datetime #from datetime import datetime, timezone
import dateutil.tz

def authenticate() -> httplib2.Http:
    # You need a Google project and credentials.  Check out
    app_keys = {
        "APP_CLIENT_ID": "blah",         # fill in your client id
        "APP_CLIENT_SECRET": "moreblah"  # fill in your client secret
    }
    storage = Storage(USER_OAUTH_DATA_FILE)  # USER_OAUTH_DATA_FILE definition elided in the original post
    credentials = storage.get()
    if credentials is None or credentials.invalid:
        credentials = tools.run_flow(
            OAuth2WebServerFlow(
                client_id=app_keys['APP_CLIENT_ID'],
                client_secret=app_keys['APP_CLIENT_SECRET'],
                scope=[...],  # scope list elided in the original post
            ),
            storage,
        )
    auth_http = credentials.authorize(httplib2.Http())
    return auth_http

calendarVersion = "WRP / /WebCalendar/calendar_190310.14_p4"
headers = {  # the opening of this dict was lost in the original post
    'content-type': 'application/json+protobuf',
}

def printReminder(auth_http, aReminderId):
    request = {"1": {"4": calendarVersion}, "2": [{"2": aReminderId}]}
    response, content = auth_http.request(
        ...,  # reminder-get API URL elided in the original post
        method='POST',
        body=json.dumps(request),
    )
    assert(response.status == 200)
    obj = json.loads(content)
    print(obj)

def main():
    auth_http = authenticate()

    # https://developers.google.com/oauthplayground/ for figuring out scopes and apis using the scopes
    response, content = auth_http.request(
        ...,  # userinfo API URL elided in the original post
        method='GET',
    )
    assert response.status == 200, (response, content)
    obj = json.loads(content)
    email = obj['email']
    print('Got email', email)

    jreminderLabelId = None
    if not jreminderLabelId:
        response, content = auth_http.request(
            ...,  # gmail labels-list API URL elided in the original post
            method='GET',
        )
        assert(response.status == 200)
        obj = json.loads(content)
        labels = obj["labels"]
        jreminderLabelId = None
        for labelInfo in labels:
            if labelInfo["name"] == "jreminder":
                jreminderLabelId = labelInfo["id"]
                break
        assert jreminderLabelId
    print('Got jreminder label id', jreminderLabelId)

    reminderCalendarId = None
    if not reminderCalendarId:
        response, content = auth_http.request(
            ...,  # calendar-list API URL elided in the original post
            method='GET',
        )
        assert(response.status == 200)
        obj = json.loads(content)
        calendars = obj["items"]
        for aCal in calendars:
            #print(aCal["id"], aCal["summary"])
            if aCal["summary"] == "jReminders":
                reminderCalendarId = aCal["id"]
                break
        assert reminderCalendarId
    print('Got jReminders calendar id', reminderCalendarId)

    baseListRequest = {
        "1": {"4": calendarVersion},
        "2": [{"1":3},{"1":16},{"1":1},{"1":8},{"1":11},{"1":5},{"1":6},{"1":13},{"1":4},{"1":12},{"1":7},{"1":17}],
        # 3: due_before_ms
        # 4: due_after_ms
        "5": 0,   # include_archived      Reminder status (0: incomplete only / 1: including completed)
        "6": 20,  # max_results      limit
        # 7: continuation_token
        # 9: include_completed
        # 10: include_deleted
        # 12: recurrence_id
        # 13: recurrence_options
        # 14: continuation
        # 15: exclude_due_before_ms
        # 16: exclude_due_after_ms
        # 18: require_snoozed
        # 19: require_included_in_bigtop
        # 20: include_email_reminders
        # 21: project_id
        # 22: require_excluded_in_bigtop
        # 23: raw_query
        # 24: archived_before_ms
        # 25: archived_after_ms
        # 26: exclude_archived_before_ms
        # 27: exclude_archived_after_ms
        # 28: utc_due_before_ms
        # 29: utc_due_after_ms
        # 30: exclude_utc_due_before_ms
        # 31: exclude_utc_due_after_ms
    }
    #print(content)
    # "1"."2": reminder id
    # 2: maybe what app created it?
    # 3: reminder text
    #                                         yr   month   day         24hr   min  sec?        unix ms
    # 5: reminder snooze expired time : {"1":2019,"2":3,"3":16,"4":{"1":14,"2":6,"3":0},"7":"1552759560000"}
    # 8: done=1 ?
    # 10: 1 (deleted?)
    # 11: done time?
    # 13: ?
    # 16: something about repeat frequency?
    # 17: extra bonus info?
    # 18: creation ms
    # 23: time ms right before creation

    print('Getting past reminders')
    pastListRequest = copy.deepcopy(baseListRequest)
    pastListRequest["16"] = str(int(time.time()) + 0 * 24 * 3600) + "000"  # time msec
    response, content = auth_http.request(
        ...,  # reminders-list API URL elided in the original post
        method='POST',
        body=json.dumps(pastListRequest),
    )
    assert response.status == 200, (response, content)
    obj = json.loads(content)
    reminders = obj["1"] if "1" in obj else []
    for aReminder in reminders:
        aReminderId = aReminder["1"]["2"]
        print(time.strftime('%Y-%m-%d', time.localtime(int(aReminder["18"])/1000)), aReminderId, aReminder["3"])
        print(aReminder)
        # Reminder is not marked done
        assert "8" not in aReminder
        assert "11" not in aReminder
        #assert aReminder["2"]["1"] == 1 # reminder is created in inbox.  Otherwise, unsure if delete will work
        isRepeat = "16" in aReminder

        if '5' in aReminder:
            rTime = aReminder["5"]["7"]
            assert rTime and len(rTime) > 6
            assert re.match(r'^[0-9]+$', rTime)
            rTime = int(rTime) / 1000
            isPast = time.time() > rTime
        else:
            rTime = int(aReminder['18']) / 1000
            assert rTime > 1000000
            isPast = time.time() > rTime
        assert isPast

        x = input('Convert to email ["n" to skip]')  # prompt line was lost in the original post
        if x != 'n':
            message = MIMEText(aReminder["3"])
            message['To'] = email
            message['From'] = email
            message['Subject'] = aReminder["3"]
            #print (message.as_string())
            data = {'raw': base64.urlsafe_b64encode(bytearray(message.as_string(), 'utf-8')).decode('utf-8')}
            #print (data)
            data["labelIds"] = ["UNREAD", "INBOX", jreminderLabelId]
            response, content = auth_http.request(
                ...,  # gmail messages-insert API URL elided in the original post
                method='POST',
                body=json.dumps(data),
            )
            assert response.status == 200, (response, content)

        if x != 'n':
            deleteRequest = {"1": {"4": calendarVersion},
                             "2": [{"2": aReminderId}]}
            response, content = auth_http.request(
                ...,  # reminders-delete API URL elided in the original post
                method='POST',
                body=json.dumps(deleteRequest),
            )
            assert response.status == 200, (response, content)

        printReminder(auth_http, aReminderId)

    print('Getting future reminders')
    futureListRequest = copy.deepcopy(baseListRequest)
    #futureListRequest["15"] = str(int(time.time()) + 3 * 365 * 24 * 3600) + "000" # time msec
    futureListRequest["16"] = str(int(time.time()) + 30 * 365 * 24 * 3600) + "000"  # time msec
    response, content = auth_http.request(
        ...,  # reminders-list API URL elided in the original post
        method='POST',
        body=json.dumps(futureListRequest),
    )
    assert response.status == 200, (response, content)
    obj = json.loads(content)
    reminders = obj["1"] if "1" in obj else []
    #print(obj)
    for aReminder in reminders:
        aReminderId = aReminder["1"]["2"]
        print(time.strftime('%Y-%m-%d', time.localtime(int(aReminder["18"])/1000)), aReminderId, aReminder["3"])
        print(aReminder)
        # Reminder is not marked done
        assert "8" not in aReminder
        assert "11" not in aReminder
        #assert aReminder["2"]["1"] == 1 # reminder is created in inbox.  Otherwise, unsure if delete will work
        isRepeat = "16" in aReminder

        if '7' in aReminder['5']:
            rTime = aReminder["5"]["7"]
            assert rTime and len(rTime) > 6
            assert re.match(r'^[0-9]+$', rTime)
            rTimeSecs = int(rTime) / 1000
            isPast = time.time() > rTimeSecs
            assert not isPast

            eTime = datetime.datetime.fromtimestamp(rTimeSecs, dateutil.tz.gettz('America/New_York'))
        else:
            print('Warning, missing unix timestamp, double check date')
            eTime = datetime.datetime(aReminder['5']['1'], aReminder['5']['2'], aReminder['5']['3'],
                                      aReminder['5']['4']['1'], aReminder['5']['4']['2'], aReminder['5']['4']['3'],
                                      tzinfo=dateutil.tz.gettz('America/New_York'))

        #nowT = datetime.now().astimezone().isoformat(timespec='seconds')
        request = {
            'start': {
                'dateTime': eTime.isoformat(timespec='seconds'),
                'timeZone': 'America/New_York',
            },
            'end': {
                'dateTime': (eTime + datetime.timedelta(minutes=15)).isoformat(timespec='seconds'),
                'timeZone': 'America/New_York',
            },
            'description': aReminder["3"],  # detailed description
            'summary': aReminder["3"],      # top line name of event
            'reminders': {
                'overrides': [{'method': 'email', 'minutes': 5}],
                'useDefault': False
            }
        }

        if isRepeat:
            ruleTxt = 'RRULE:'
            repeatInfo = aReminder['16']['1']
            assert aReminder['5']['4']['1'] == repeatInfo['5']['1']['1']
            assert aReminder['5']['4']['2'] == repeatInfo['5']['1']['2']
            assert aReminder['5']['4']['3'] == repeatInfo['5']['1']['3']
            # time zone issue maybe
            #assert eTime.hour == repeatInfo['5']['1']['1'], (eTime.hour, repeatInfo['5']['1']['1'])
            assert eTime.minute == repeatInfo['5']['1']['2']
            assert eTime.second == repeatInfo['5']['1']['3']
            if repeatInfo['1'] == 3:
                ruleTxt += 'FREQ=YEARLY'
            elif repeatInfo['1'] == 2:
                ruleTxt += 'FREQ=MONTHLY'
            elif repeatInfo['1'] == 1:
                ruleTxt += 'FREQ=WEEKLY'
            elif repeatInfo['1'] == 0:
                ruleTxt += 'FREQ=DAILY'
            else:
                assert False
            if '2' in repeatInfo:
                ruleTxt += ';INTERVAL=' + str(repeatInfo['2'])
            if '7' in repeatInfo:
                assert repeatInfo['1'] == 2  # monthly
                day = repeatInfo['7']['1'][0]
                assert day > 0 and day <= 28
                ruleTxt += ';BYMONTHDAY=' + str(day)
            if '6' in repeatInfo:
                assert repeatInfo['1'] == 1  # weekly
                weekday = repeatInfo['6']['1'][0]
                assert weekday > 0 and weekday < 7
                ruleTxt += ';WKST=MO;BYDAY=' + ['MO','TU','WE','TH','FR','SA','SU'][weekday - 1]
            if '8' in repeatInfo:
                assert repeatInfo['1'] == 3  # yearly
            request['recurrence'] = [ruleTxt]  # e.g. [ 'RRULE:FREQ=DAILY;INTERVAL=3' ]
        #nowT = datetime.datetime.now()
        #nowT = datetime.datetime(nowT.year, nowT.month, nowT.day, 9, 0, 0)
        print(request)
        x = input('Create calendar event ["n" to skip]')
        if x != 'n':
            response, content = auth_http.request(
                ...,  # calendar events-insert API URL elided in the original post
                method='POST',
                body=json.dumps(request),
            )
            assert response.status == 200, (response, content)

        if isRepeat:
            mm = re.match(r'^([^/]+)/([0-9]+)$', aReminderId)
            assert mm
            aReminderId = mm.group(1)
            subId = aReminder['5']['7']
            assert len(subId) > 6
            deleteRequest = {"1": {"4": calendarVersion},
                             "2": {"1": aReminderId},
                             "4": {"1": 1, "2": 0, "3": subId}
                             }
        else:
            deleteRequest = {"1": {"4": calendarVersion},
                             "2": [{"2": aReminderId}]}
        print(deleteRequest)
        #print(json.dumps(deleteRequest))
        x = input('Delete future reminder')
        response, content = auth_http.request(
            uri=delUrl,  # delUrl (the delete API URL) was elided in the original post
            method='POST',
            body=json.dumps(deleteRequest),
        )
        assert response.status == 200, (response, content)

if __name__ == '__main__':
    main()
```

## Wednesday, May 30, 2018

### IP to country mapping in under 1MB (node js)

Hi all,
I recently wrote some node js code to do IP to country code mapping that uses less than 1MB of RAM for data storage. It also uses the ARIN registry data directly and so doesn't require attribution, unlike some of the commercial sources out there. Because it takes up little memory, it's feasible to keep the table in-memory on my tiny web server, which means I avoid any RPC costs associated with IP lookup.

The gist is to first pull a copy of the ARIN IP registry data:

```wget ftp://ftp.arin.net/pub/stats/arin/delegated-arin-extended-latest
wget ftp://ftp.ripe.net/pub/stats/ripencc/delegated-ripencc-latest
wget ftp://ftp.lacnic.net/pub/stats/lacnic/delegated-lacnic-latest
wget ftp://ftp.apnic.net/pub/stats/apnic/delegated-apnic-latest
wget ftp://ftp.afrinic.net/pub/stats/afrinic/delegated-afrinic-latest
```

Then do a bit of filtering and sorting:

```cat delegated-* | grep '|ipv4' | sort -t '|' -k 4,4 -k 5 -V > ./comb3
cat delegated-* | grep '|ipv6' | sort -t '|' -k 4,4 -k 5 -V > ./comb3.ipv6
```
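For reference, the delegated-* files are pipe-separated, with a version header and per-type summary lines mixed in. The real implementation here is node js, but a minimal Python sketch of a parser for the records we care about looks like this (the example rows are made up, but follow the real format):

```python
def parse_delegated(lines):
    """Yield (kind, start, value, country) records from delegated-* lines.

    For ipv4 rows the 5th field is a host count (not always a power of two);
    for ipv6 rows it is the CIDR prefix length.
    """
    records = []
    for line in lines:
        if line.startswith('#'):
            continue                      # comment lines
        parts = line.rstrip('\n').split('|')
        if len(parts) < 7 or parts[2] not in ('ipv4', 'ipv6'):
            continue                      # version header / summary lines
        cc, kind, start, value = parts[1], parts[2], parts[3], int(parts[4])
        records.append((kind, start, value, cc))
    return records

# Made-up example rows in the real format:
example = [
    "2|arin|20180530|12345|19700101|20180529|-0500",      # version header
    "arin|*|ipv4|*|12345|summary",                        # summary line
    "arin|US|ipv4|23.112.0.0|65536|20131015|allocated|x",
    "apnic|JP|ipv6|2001:200::|35|19990813|allocated|x",
]
print(parse_delegated(example))
```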

Then build the memory structures and serialize them to disk. I handle IPv4 and IPv6 differently. Both use the raw Uint8Array type.

### IPv4

The ARIN data specifies IP address ranges as an IP followed by a length, and the length is not necessarily a power of two. Since IPv4 addresses are pretty densely allocated, the data structure I use is a 65kB array mapping the first two octets of an IP address to a run-length-coded list of (range + countryCode) blocks and gaps between ranges. The country code takes up one byte and the range or gap length is one byte (storing the log of the length), plus a bunch of special cases for non-power-of-two lengths and so forth.

### IPv6

ARIN specifies IPv6 ranges in CIDR notation, so it's a prefix followed by the number of significant bits in the prefix. Because of that, and because the IPv6 range is very sparse, I use a simple trie with one level for each IP group (hextet, e.g. "2a0a"). Each level is a hash table with a 2-byte key (for the hextet) and a 3-byte value. The value is either one byte of CIDR routing prefix (e.g. the '20' in '/20') followed by two bytes of country code, or it is 3 bytes of offset indicating the location of the next level down in the trie. There's enough extra space in there to have a special bit to indicate an empty entry.

Because this data structure is read-only in production use, each hash table is resized to something like 1.3 times the number of entries.
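A greatly simplified sketch of the trie idea, with Python dicts standing in for the packed Uint8Array hash tables and the byte-level encoding skipped. It only handles prefixes aligned to hextet boundaries (the stored prefix length is what lets the real code handle cases like /20), and the example mapping is made up:

```python
def trie_insert(root, prefix_hextets, prefix_bits, country):
    """Insert a hextet-aligned prefix; the leaf stores (prefix_bits, country)."""
    node = root
    for h in prefix_hextets[:-1]:
        node = node.setdefault(h, {})     # interior level, one per hextet
    node[prefix_hextets[-1]] = (prefix_bits, country)

def trie_lookup(root, addr_hextets):
    """Walk one level per hextet until we hit a leaf or fall off the trie."""
    node = root
    for h in addr_hextets:
        entry = node.get(h)
        if entry is None:
            return None                   # no allocation covers this address
        if isinstance(entry, tuple):      # leaf: (prefix_bits, country_code)
            return entry[1]
        node = entry                      # interior node: descend a level
    return None

root = {}
trie_insert(root, [0x2a0a, 0x0ac0], 32, 'DE')       # hypothetical /32
print(trie_lookup(root, [0x2a0a, 0x0ac0, 0x1234]))  # 'DE'
print(trie_lookup(root, [0x2a0b, 0x0000]))          # None
```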

### Space usage and performance

As of May 30, 2018, the ipv4 table is 634kB and the ipv6 table is 302kB.

With some dumb benchmarking, it looks like a lookup takes about 30us for either ipv4 or ipv6. I will note that there are certain ipv4 blocks where the linked list gets long. I think I noted a max length of 512 entries for one, so user beware. One could probably add some skip-list-like indexes to the longer lists to limit the worst-case lookup time, or better yet, use the fact that most of the time ranges are power-of-two length, so a list is probably not the best representation.

The node source code for all this is about 600 lines. I could probably open-source it if there's enough interest.

## Wednesday, May 9, 2018

### Testing old Safari browser (6.1.6) on current Linux / Ubuntu 18.04

I run Ubuntu Bionic 18.04, and I recently needed to test my website on Safari 6.1.6, released back in 2014. The closest I got was by installing an old version of the Epiphany/Web browser, since both Epiphany and Safari are based on the WebKit browser engine. The steps I followed, after much trial and error, were:
1. Find the release date of the desired Safari version by looking at the wiki page. Safari 6.1.6 was released around August 2014.
2. Find a version of the epiphany-browser package from a similar date by browsing the debian package history. If you click on the links there it'll show the date the package appeared. epiphany-browser 3.12.1-1 is from around July 2014.
3. Look in the Ubuntu history of the epiphany package to see if there's an Ubuntu release that has a version close to what you want. I'm currently running Bionic, and saw that Trusty has a close version, 3.10.3-0ubuntu2.
4. Add the trusty package sources to /etc/apt/sources.list:
```deb http://us.archive.ubuntu.com/ubuntu trusty universe
deb http://us.archive.ubuntu.com/ubuntu trusty multiverse
deb http://security.ubuntu.com/ubuntu trusty-security main
deb http://cz.archive.ubuntu.com/ubuntu trusty main
```
5. Then run apt to update and get the correct version. I found the right packages to include by trial and error, as the apt-get install command would otherwise fail:
```apt-get remove epiphany-browser epiphany-browser-data
apt-get update
sudo apt-get install epiphany-browser=3.10.3-0ubuntu2 epiphany-browser-data=3.10.3-0ubuntu2 libwebkit2gtk-3.0-25 libjavascriptcoregtk-3.0-0=2.4.10-0ubuntu0.14.04.1
sudo apt-mark hold epiphany-browser epiphany-browser-data
```
And voila! epiphany-browser was available for testing / debugging.
My guess is that it's not a good idea to leave the trusty source lines in sources.list long-term, since they might confuse future upgrades and may also confuse apt-cache. I plan to remove them and clean up once I'm done testing.

## Friday, September 29, 2017

### The optimal shape of a kitchen measuring cup

Four days ago I launched Euclid, a more accurate measuring cup, on Kickstarter, and it just crossed 100% of its fund-raising goal, which is amazing. The future has changed from *if* this idea ever becomes a real thing to "It’s going to be real!"

Euclid’s shape is computationally generated, based on math resulting from a neat geometry insight. I quit my software engineering job four years ago to create it, and though it emerged from custom software, the cup itself is not digital in any way. No bluetooth, app or touchscreen. The Kickstarter page has a summary of the idea. Here I wanted to add a bit more detail about the project and how it came about.

My background is as an infrastructure software engineer. I was at Google and Facebook for 10 years, building a number of distributed systems including GFS, Sibyl and Configerator.

I also love math and tinkering with things. Growing up, I spent a lot of (mostly voluntary) time in the basement, screwing around doing dumb stuff. Whatever anyone says, it is a terrible idea to pull the spark plug wire off a running lawn mower with your bare hands.

One day I was baking in the kitchen and had a 2-cup measuring cup out and the recipe called for 1/4-cup. I felt like I should switch to a smaller measuring cup and for whatever reason stopped to wonder why. Why do smaller measuring cups seem better at measuring small amounts than larger measuring cups?

I realized it was about accuracy and that clearly defining measurement error would help in reasoning this through. Brace yourself, I’m going to get a bit technical, because understanding the problem was the key to solving it.

## The problem

Let’s consider measurement error as the vertical distance between the target measurement line and the true height of the liquid. For example, when measuring 1 cup, you may think you hit the target line, but actually you overshot by 1mm because the measuring cup was not quite at eye level. Other possible reasons for missing the line include liquid sloshing, hand shaking, and the Coriolis force (ok, I’m kidding about that last one… I think).

The graphic below shows an example of overshooting by 1mm:

The amount of extra liquid in the cup equals 1mm times the surface area of the liquid. Divide that by the target volume, and you have your fractional error (e.g., 5%).

If we assume that over-shooting (or under-shooting) happens just as easily at the top of a measuring cup as at the bottom, then that means the extra liquid height (1mm in the graphic) will be roughly the same at the top and the bottom.

Looking at the equation in the graphic, this means that measurement error depends primarily on the ratio of surface area to volume at the target line. Let’s call that the S/V ratio.
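To put rough numbers on this: for a cylindrical cup the surface area is constant, so the fractional error scales as 1/volume. A quick sketch (the 40 mm radius and the volumes are made up for illustration):

```python
import math

r_mm = 40.0          # hypothetical cylinder radius
overshoot_mm = 1.0   # missed the target line by 1 mm
area = math.pi * r_mm ** 2           # liquid surface area, mm^2 (constant)

for target_ml in (500, 250, 125, 60):
    target_mm3 = target_ml * 1000.0  # 1 ml = 1000 mm^3
    frac_err = overshoot_mm * area / target_mm3
    print(f"{target_ml:3d} ml: {100 * frac_err:.1f}% error")
# The same 1 mm miss costs about 1% at 500 ml but about 8% at 60 ml.
```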

Here are a few interesting implications that may correspond to your intuition:

- Narrow measuring cups are more accurate than wide measuring cups.
- Small-capacity measuring cups are better at measuring small volumes because they are narrower. This answers the conundrum that started this whole quest.
- Cylindrical measuring cups measure smaller amounts less accurately than larger amounts. Why? Because the S/V ratio is larger at smaller amounts. This is why it’s hard to measure ¼-cup accurately in a 2-cup measuring cup. It’s also why larger measuring cups don’t have lines for ⅛-cup or 1-tablespoon or 1-teaspoon.
- Many measuring cups have sloped sides or are conical. All of these shapes have the same problem: the S/V ratio increases for smaller amounts, though not as quickly as in a cylindrical cup.

This animation shows the relationship between shape and error:

A natural question is, if narrow means more accurate, then why not use test tubes for measuring everything? Well, a test tube that holds 2 cups would be 10 feet high. I’m not sure about your cupboards, but 10 feet won’t fit in mine, even if I rearrange things. But even if you bit the bullet and remodeled the kitchen to include a 10-foot-high cupboard, the test tube would still be hard to use. A large fraction of whatever you’re measuring might end up stuck to the sides of the tube because there’s so much tube. So practical measuring cup design trades off accuracy for convenience.

## Designing for the S/V ratio

We just observed that the ratio of surface area to volume (S/V ratio) is important to measuring accuracy, and that measuring cups today are less accurate at measuring smaller amounts because the S/V ratio is larger for smaller amounts.

That suggests that a solution would be a measuring cup shaped so that the S/V ratio is the same at every line marking. That would make it just as accurate measuring small amounts as large amounts. More precisely, it would be optimal in the sense that it minimizes the variance of error across all measurement amounts, for a chef who can get liquid to within k millimeters of each measurement line, for some constant k that varies with each chef's ability / effort.

Figuring out what kind of shape would have this property involves math. I started with some simplifying assumptions, like assuming the measuring cup is circular. It still was tricky and took me a number of months working on weekends. The math is not as complicated as it might sound. If I were honest, I’d say I didn’t know what I was doing initially and it took me a while to think about the problem the right way. But I’m not sure I’m ready to admit that :).
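A sketch of the key observation (simplified, under the circular-cup assumption mentioned above): the liquid surface area at height h is exactly dV/dh, so a constant S/V ratio k means dV/dh = k·V. That forces the volume to grow exponentially with height, V(h) = V(0)·e^(kh), and the radius of a circular cup to grow like e^(kh/2). The constants below are made up for illustration:

```python
import math

k = 0.05          # target S/V ratio, 1/mm (made up for illustration)
V0 = 10000.0      # volume at the lowest marking, mm^3

def V(h):
    """Volume below height h: exponential growth, from dV/dh = k * V."""
    return V0 * math.exp(k * h)

def A(h):
    """Liquid surface area at height h equals dV/dh = k * V(h)."""
    return k * V(h)

def r(h):
    """Radius of the circular cross-section: A = pi * r^2."""
    return math.sqrt(A(h) / math.pi)

for h in (0.0, 20.0, 40.0):
    print(h, A(h) / V(h))   # the S/V ratio is k at every height
```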

Below is an animation of how the surface area and volume change. The Kickstarter page has a more detailed description of how the solution works. Below, I’ll talk about the process a bit.

## The first design

There are many potential designs that have a constant S/V ratio. As I was developing the equations, I started playing with visualizations to understand the design space, first 2D via javascript and SVG, and then 3D using Blender, an open-source modelling program. Blender was cool partly because it has a python API, which made it possible to programmatically generate curves and surfaces that have the right geometry. The first rendering I have from this time is:

## Bye-bye job

Around this point I left Facebook to work on this full-time. I knew I wanted to try starting some kind of company, I loved the measuring cup idea, and I also loved that it was far afield from what software engineers normally start companies around. Part of me liked the quizzical, surprised and somewhat distributed expression people got on their faces when I expressed my intentions.

I thought of the measuring cup somewhat as a passion project and break from SWE, so I jumped into it with no business plan or customer research. Around this time I also incorporated an S-corp, filed patent applications and so forth.

## Designing how to design

I started having industrial designer contractors help design the measuring cup and it quickly became apparent that there was a huge gap. Neither they nor their design software could couple design with the surface area & volume (S/V) geometry constraints in the way we needed.

There were some expensive process failures here. They would design shapes that I didn’t know how to modify to obey the S/V constraints. The scheme I finally came up with was to separate design into three steps. First, the industrial designers created a cross-sectional closed curve of the measuring cup, such as:

I ran custom Python that replicated the curve as a sequence of ribs, positioning and scaling each rib according to the S/V constraints. For example:


While 3D design software is terrible for designing with mathematical constraints, it is excellent for calculating surface area and volume of things. So once I had generated a shape, it was possible to write code to double check that the surface area and volume at every height was indeed what it was supposed to be.

## Prototyping

I went through 25+ physical prototypes and countless 3D models. The bread-and-butter prototypes were 3D-printed in white plastic, sometimes on someone’s MakerBot found through 3D Hubs, a 3D printing marketplace. A much smaller number of high-quality prototypes, including the one in the video, were milled from a block of PMMA. I wasn’t able to figure out a way to make transparent, food-safe prototypes economically.

The original shapes were circular, such as

But it became apparent that industrial printing techniques could not print accurately on surfaces with compound curvature (i.e., non-zero Gaussian curvature, meaning the surface curves in two directions). So we switched to the flat-sided design you see on Kickstarter. This design also had the bonus that the markings are much easier to read than on both our previous prototypes and most existing measuring cups.

## Manufacturing

Talking to manufacturers was both thrilling and challenging. It was thrilling because they run heavy machinery that takes as input raw materials and usually some software and produces useful physical objects. As a software engineer, my best efforts just push around electrons.

Talking to manufacturers was challenging because the domain knowledge is very different from that of software engineering. For example, there are engineers who specialize in something called Design For Manufacturing (DFM). You might think you come up with a design, prototype, do some testing and then hand it off to a manufacturer. Oh no no, my friend. If you don’t have in mind the technical constraints of manufacturing when you design, you probably are in for a surprise and some redesign work.

Manufacturers of course want business, but there is also a cost to them for talking with you. That includes the time of their in-house engineer to analyze your design, the risk that you might cause problems for them down the line because you’re inexperienced, and of course the opportunity cost of not working with someone who might place a larger order than you intend to.

I generally found manufacturers very polite and willing to both quote my designs and give useful feedback if there were issues.  Through many iterations and discussions with many manufacturers, along with advice from DFM experts, the design was refined to the point where they now don't see any issues.

I’ll save a detailed discussion of manufacturing for another post, but below are two of the bigger limitations to injection molding.  Limitations? To injection molding? Yup.

First, constant wall thickness is important. Injection molding does not tolerate large variations in part thickness well. Partly this is because plastic shrinks as it cools, and thicker sections shrink differently than thinner sections. Thick blobs of plastic can also have problems with air bubbles, and they increase cost, since the part needs to cool in the mold longer, which is time the machine can’t spend making the next part.

Second, the way plastic flows when it's being injected matters. The way it works is this. The mold is a block of steel with a cavity into which hot, liquid plastic is squirted under high pressure. The plastic enters through a hole in the mold (called a “port”) and flows to fill the entire mold. As the plastic flows, it cools because the steel is relatively cold. The steel is cool partly to help the part solidify quickly so the machine can move on to make the next part. As the plastic cools, it becomes thicker and starts to harden. One reason the plastic enters under high pressure is so it fills the entire mold before it hardens.

The consequence of this is that sections of the mold that are further from the injection port are going to receive plastic that is cooler and thicker, and so the shape that the plastic can form is more limited. Also, shrinkage of plastic that is farther from the injection port differs from that of plastic closer to the injection port.

Luckily there are great software flow analysis tools out there that simulate how the plastic fills a mold and can predict what sort of issues might arise.

## The rest

I skipped over a lot of detail about manufacturing and haven't even talked about printing and industrial inks. Nor have I talked about selecting a specific plastic and the range of material properties of plastics. Stay tuned or ask questions...

## Friday, February 17, 2017

### Converting video to high-quality gif on linux (via ffmpeg)

A brief post today. Recently, I've been taking videos of my screen and converting them to animated gifs.
The command I use to record the video (for example, 15 seconds of video at 25 fps, sized 830x150 pixels, at offset x=230, y=360 from the top left) is:
```avconv -f x11grab -y -r 25 -s 830x150 -i :0.0+230,360 -vcodec libx264 -crf 0 -threads 4 -t 15 myvideo.mp4
```

For a while I was using a one-liner to convert the video to an animated gif using ffmpeg and ImageMagick's convert command, based on this post. Below, it samples the mp4 at 15 fps.

```ffmpeg -i myvideo.mp4 -r 15 -f image2pipe -vcodec ppm - | convert -delay 7 -loop 0 - gif:- | convert -layers Optimize - myvideo.gif
```

The problem I ran into was that the color palette chosen for the gifs was poor, creating weird artifacts.

So I came across http://blog.pkh.me/p/21-high-quality-gif-with-ffmpeg.html, which laid out another strategy, resulting in the following two commands:

```ffmpeg -v warning -i myvideo.mp4 -vf "fps=25,palettegen" -y /tmp/palette.png
ffmpeg -v warning -i myvideo.mp4 -i /tmp/palette.png -lavfi "fps=25 [x]; [x][1:v] paletteuse" -y myvideo.gif
```

or, if you don't need to change the frame rate:

```ffmpeg -v warning -i myvideo.mp4 -vf "palettegen" -y /tmp/palette.png
ffmpeg -v warning -i myvideo.mp4 -i /tmp/palette.png -lavfi paletteuse -y myvideo.gif
```

I found this approach much better. The color artifacts were gone and also the file sizes were much smaller! It chopped about 50-80% off of the file size compared to the previous ffmpeg/convert pipeline.