Further reading
Statistics of Deadly Quarrels, by L. F. Richardson, 1975. ISBN: 0910286108.
Zipf's law does not just apply to word frequencies. It has been found to apply to city sizes, income distributions, social networks, and a variety of other contexts.
One common form of data we might encounter in a study is what people call a "bag" or "multiset". A bag is like a set in that the elements have no intrinsic order, but a bag differs from a set in one important way. In a set, an element is either present or absent, so when we list the elements, each thing in the set is listed only once. In a bag, each element can occur once, twice, or any arbitrary number of times, and two bags are equal if and only if each element occurs the same number of times in both. Bags are such a natural idea that they have been used in mathematics for centuries, even though they weren't formally recognized and named until the 1970s. Here are some examples of bags.
Often, rather than list out all the elements of a multiset repeatedly, we can save space by listing each element once, along with an integer representing how many times that element appears (its multiplicity).
So the bag \[\{ a,a,a,f,b,b,a,a \}\] might be written \[\begin{gather*}
\{ (a,5), (b,2), (f,1) \} \quad\text{or just}\quad a^5 b^2 f
\end{gather*}\] In computations, vectors are often used to represent bags.
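In Python, for instance, a bag can be represented with `collections.Counter`, which maps each element to its multiplicity (a minimal sketch):

```python
from collections import Counter

# The bag {a,a,a,f,b,b,a,a}, built from a string of its elements.
bag = Counter("aaafbbaa")
print(bag)                           # Counter({'a': 5, 'b': 2, 'f': 1})

# Two bags are equal iff each element has the same multiplicity in both.
print(bag == Counter("faabbaaa"))    # → True (same multiplicities, different order)
print(bag == Counter("aafbb"))       # → False (different multiplicities)
```

Note that the order of elements in the input doesn't matter, matching the definition of bag equality above.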
The commonness of words in the King James Bible is an easily accessible example, as the data can be found online.
# https://www.kingjamesbibleonline.org/PopularBibleWords.php
# times words show up in the King James version of the Bible, ranked from most to least (top 250)
28364
28269
21257
12044
11683
11285
9022
8684
8326
7582
7365
7127
6796
6664
6638
6579
6568
6442
6073
5973
5900
5606
5431
4563
4487
4432
4369
4293
3981
3679
3608
3603
3581
3478
3441
3410
3307
3237
3162
3058
2952
2937
2902
2834
2828
2747
2634
2606
2551
2509
2496
2271
2251
2210
2210
2165
2154
2145
2124
2078
2042
1990
1980
1953
1881
1874
1860
1840
1760
1759
1727
1692
1658
1656
1648
1641
1609
1575
1547
1544
1526
1488
1487
1477
1468
1438
1419
1408
1396
1392
1381
1358
1329
1306
1296
1255
1254
1239
1207
1189
1164
1163
1122
1088
1084
1080
1070
1061
1049
1046
1024
1020
1012
992
989
978
977
977
977
974
965
959
956
953
953
944
934
925
923
923
894
881
878
873
861
859
835
834
831
819
819
817
804
798
793
793
787
785
778
774
770
756
737
734
727
726
716
714
700
699
699
671
659
657
648
644
641
640
639
633
633
631
621
617
610
609
604
604
603
601
599
591
590
584
578
571
565
554
553
551
550
550
548
544
544
540
538
533
528
528
526
524
523
523
511
503
503
501
501
498
496
496
495
490
487
483
482
480
475
470
470
463
463
462
458
456
452
450
443
442
441
440
433
426
424
424
423
423
415
414
413
410
405
402
401
399
399
399
398
398
You might think, "So what? What can we hope to do with a data set counting the number of times each word is used? Without the order, the words are a nearly meaningless jumble." That's true, in the sense that order is key to communicating meaning. But in another way, it turns out there is at least one somewhat surprising thing lurking in these data.
Words are unequal. Some words are used much more commonly than others: "the" and "and" are used all the time, while "gerrymander" and "upholster" are used rarely, and only in specific contexts. One might argue that the linguistic importance of a word is predicted by how often it is used relative to other words. One way to represent this is to plot the empirical CDF of word usage. To make an empirical CDF, make a list of the number of times each word appears (but forget the words themselves). Sort the numbers from smallest (most rarely used) to largest (most commonly used), and plot each value as a function of normalized rank, i.e.,
\[\mathcal{F} _ n(t)=\frac{\textrm{number of values }\leq t}{\textrm{number of samples, }n}\] where \(n\) is the total number of different words:
values.sort()
plot(values, arange(1, n + 1) / float(n))
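As a self-contained sketch of the same computation in pure Python (using a small made-up list of counts in place of the real data):

```python
# Hypothetical word counts standing in for the real Bible data.
counts = [3, 1, 4, 1, 5, 9, 2, 6]

n = len(counts)
counts.sort()  # rarest first

# Empirical CDF: each sorted value paired with its normalized rank,
# i.e. the fraction of words used at most that many times.
ecdf = [(value, (i + 1) / float(n)) for i, value in enumerate(counts)]
print(ecdf[0])    # smallest count, height 1/n
print(ecdf[-1])   # largest count, height 1.0
```

Plotting the second coordinate against the first gives the empirical CDF curve described above.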
This picture shows that there are lots of words that are rather uncommon. There are almost 800,000 words in the Bible. Half of those words occur less than 2,000 times each, while a few words occur more than 10,000 times each. But this picture doesn't reveal much more of a pattern than that. A better picture can be made by ranking all the words, in order from most used to least used, and then plotting the frequency of each word's use as a function of its rank. These are called rank-order statistics. Rank-order statistics are particularly useful if you don't know the denominator. What we'd need for the empirical CDF, above, is frequency, but we don't know how many words there are in the Bible. (OK, there are 783,137 words, but in many contexts the denominator is uncertain or inconvenient to obtain. Recall, for example, our analysis of telephone line loading.) If plotted on a log-log plot, the rank-order statistics reveal a surprising pattern.
from pylab import subplot, plot, loglog, show

n = len(values)
values.sort(reverse=True)     # most common first
rank = range(1, n + 1)
subplot(2, 1, 1)
plot(rank, values, 'bo')      # linear axes
subplot(2, 1, 2)
loglog(rank, values, 'go')    # log-log axes
show()
Notice that the rank-order statistic is a straight line on a log-log plot! This indicates a power-law relationship between rank and abundance, \(y=ax^{b}\). Using linear least squares or polyfit or whatever you prefer, we find the elasticity \(b \approx -0.93\).
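One way to carry out that fit, sketched with numpy's `polyfit` on stand-in data (an exact power law with hypothetical coefficients, mimicking the shape of the sorted word counts):

```python
import numpy as np

# Stand-in data: an exact power law y = a * r^b with a = 28364, b = -0.93.
rank = np.arange(1, 251)
counts = 28364.0 * rank ** -0.93

# On log-log axes a power law is a straight line, so fit a degree-1
# polynomial to (log rank, log count); the slope is the exponent b.
b, log_a = np.polyfit(np.log(rank), np.log(counts), 1)
print(round(b, 2))  # → -0.93
```

With real data the points only approximately follow a line, so the fitted slope is an estimate rather than exact.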
The predictive power here is a function of rank. We predict that in the Bible, the \(1000^{\textrm{th}}\) most common word will appear roughly 129 times. This phenomenon is an example of Zipf's law: given some body of written work, the frequency of any word is inversely proportional to its rank. Today, Zipf's law is used to refer to any discrete power-law distribution of ranked data: \[f(r;b) \propto 1/r^b.\] The associated probability generating function is \[\hat{r}(s) = \frac{1}{\zeta(b)} \sum _ {r=1}^{\infty} \frac{s^r}{r^b}\] where \(\zeta(b)\) is the Riemann zeta function.
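A rough numerical sketch of the normalization, with an assumed exponent \(b = 2\) and the zeta value approximated by a partial sum (note that the sum diverges for \(b \leq 1\), so a Zipf law with \(b = 1\) cannot be normalized over infinitely many ranks):

```python
# Normalizing the Zipf distribution f(r) = r^(-b) / zeta(b), with b = 2.
b = 2.0
# Partial-sum approximation of zeta(2) = pi^2/6 ≈ 1.6449.
zeta_b = sum(r ** -b for r in range(1, 200001))

def f(r):
    # Probability that a randomly chosen word has rank r.
    return r ** -b / zeta_b

print(round(zeta_b, 4))  # → 1.6449
print(round(f(1), 3))    # → 0.608
```

So under this (hypothetical) exponent, the most common word would account for about 61% of all usage.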
The normalization is, on the one hand, limiting: without normalizing, we were able to extrapolate to less-frequent words not observed. On the other hand, with the normalization we can compare against other works. For example, word frequency in "A Connecticut Yankee in King Arthur's Court" by Mark Twain:
[Data listing: word-frequency counts for A Connecticut Yankee in King Arthur's Court]
They look pretty close! In the case of A Connecticut Yankee, word frequency in the first 250 words is only slightly different, approximately \(f(r)\propto r^{-1}\). (Fun fact: in both cases, the top two words are 'and' and 'the', but the order is switched! It's 'and' first in the Bible and 'the' first in Twain's book.)
Warning: be careful with the interpretation. The empirical CDF at the start of the lecture, with word counts on the \(x\)-axis, gave us the probability that a word chosen from the list of unique words appearing in the Bible occurs fewer than \(n\) times.
Zipf's law is closely related to the Pareto distribution and to power-law distributions more generally. These are particularly important because the way a distribution's tail thins out can have significant consequences in many theories.
One proposal to explain why so many data sets are well fit by Zipf's law is that a network underlies these data, and that this network has self-similar properties reflected in power-law behaviors.
The more connections you have, the faster you make new connections.
Consider a network where each node represents a person and an edge between two nodes represents a "friendship" between two people. What happens to the network as new people join? When someone new joins, we add a node representing that person, and we suppose the new person immediately forms \(m\) friendships with distinct existing network members.
Let \(N(k,t)\) be the expected number of nodes of degree \(k\) (i.e., people connected to \(k\) others) after \(t\) new people have joined the network. A new member is more likely to befriend people who are already well connected. So, suppose the new member's relationships are drawn essentially at random from all the existing connections among members. If the network initially had \(T _ 0\) connections, then there will be \(T(t) = T _ 0 + 2mt\) connection ends after \(t\) new people have joined, since \(t\) people each add \(m\) relationship edges, and each edge is attached to two individuals. A degree-\(k\) individual (a person with \(k\) friends) accounts for \(k\) of these connection ends, so the chance that that individual is picked by the new member is \(k/(2mt + T _ 0)\).
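The "pick a connection end uniformly at random" rule gives each person a chance proportional to their degree; a small sketch with a hypothetical four-person network:

```python
import random

random.seed(1)  # hypothetical seed, for reproducibility

# A tiny hypothetical friendship network, as a list of edges.
edges = [(1, 2), (1, 3), (1, 4), (2, 3)]

# Each edge contributes two connection ends, so T = 2 * len(edges) = 8.
endpoints = [person for edge in edges for person in edge]

# Person 1 has degree 3, so they should be picked with probability 3/8.
picks = [random.choice(endpoints) for _ in range(80000)]
print(picks.count(1) / len(picks))  # ≈ 0.375
```

This is the mechanism that makes well-connected people accumulate connections faster.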
The precise process for the evolution of \(N(k,t)\) requires finding the expectation of \(P(x,k,m)\) where \(P\), the probability that \(x\) nodes of degree \(k\) are picked after \(m\) draws of edge connections, with replacement, is determined by the difference equation \[\begin{gather*} P(x+1,k,s+1) = \left( \frac{k N(k) - k x }{T} \right) P(x,k,s) + \left(1 - \frac{k (N(k) - (x+1))}{T} \right) P(x+1,k,s), \\ P(0,k,0) = 1, \quad P(x>0,k,0) = 0, \quad P(x > N(k),k,s) = 0. \end{gather*}\]But when the number of connections is large enough (\(T \gg k N(k)\), \(T\gg m\)) and there are many people (\(N(k) \gg m\)), then, on average, \(m k N(k,t) /(2mt + T _ 0)\) people with \(k\) connections gain a new relationship and now have \(k+1\) connections.
If we apply this to people of all degrees, taking \(T _ 0 = 0\), we expect \(N(k,t)\) to obey the following difference equation as each new person joins. \[\begin{gather*} N(k,t+1) = N(k,t) + \frac{ m (k-1)}{2 m t} N(k-1, t) - \frac{ m k}{2 m t} N(k, t) + 1_{k = m} \end{gather*}\] If \(k > m\), then we can simplify this to the difference equation \[N(k, t+1) - N(k, t) = \frac{(k-1)}{2t} N(k-1, t) - \frac{k}{2 t} N(k,t).\] At the lower boundary, where \(k = m\), we have the slightly simpler equation
\[N(m,t+1) - N(m,t) = 1 - \frac{mN(m,t)}{2t}.\]
We can't get \(k < m\), since everyone starts with \(m\) connections! This should make us somewhat suspicious of our assumption that \(T _ 0 = 0\), but I'll leave that as something for you to ponder.
So, we now have a system of linear recurrence equations, where the recurrences occur in two variables: the degree \(k\) of nodes in our network, and the number \(t\) of people who have joined so far. Since the coefficients vary with both \(k\) and \(t\), our recurrence system is similar to a linear, variable-coefficient partial differential equation. Variable-coefficient differential equations rarely have solutions that can be expressed without recourse to special functions, and so we don't have much prior reason to be optimistic here. But it turns out that if we try the ansatz \[N(k,t) = \frac{Ct}{k(k+1)(k+2)}\] we discover that it works, provided \(C = 2 m (m+1)\). And so, we discover the solution \[N(k,t) = \frac{2 m t \left(m + 1\right)}{k \left(k + 1\right) \left(k + 2\right)}\]
That's the average number of people with \(k\) friends.
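A sketch of how Sympy can verify the ansatz and pin down the constant \(C\) (assuming the two difference equations above):

```python
import sympy as sp

k, t, m, C = sp.symbols('k t m C', positive=True)

# The ansatz N(k, t) = C t / (k (k+1) (k+2)).
def N(kk, tt):
    return C * tt / (kk * (kk + 1) * (kk + 2))

# Interior equation: N(k,t+1) - N(k,t) = (k-1)/(2t) N(k-1,t) - k/(2t) N(k,t).
interior = sp.simplify(
    N(k, t + 1) - N(k, t)
    - ((k - 1) / (2 * t) * N(k - 1, t) - k / (2 * t) * N(k, t))
)
print(interior)  # 0: the ansatz satisfies the interior equation for any C

# Boundary at k = m: N(m,t+1) - N(m,t) = 1 - m N(m,t) / (2t).
boundary = N(m, t + 1) - N(m, t) - (1 - m * N(m, t) / (2 * t))
sols = sp.solve(sp.Eq(boundary, 0), C)
print(sp.factor(sols[0]))  # the unique solution, C = 2 m (m + 1)
```

The interior equation holds for any \(C\); it is the boundary condition at \(k = m\) that forces \(C = 2m(m+1)\).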
This is a scale-free relationship, in the sense that the curve is approximately self-similar over a range of observations: for large \(k\), \(N(k,t) \sim k^{-3}\).
From this calculation, we can compute the number of friends the average person has: \[\textrm{average number of friends}=\frac{\sum_{k=m}^\infty k N(k,t)}{\sum_{k=m}^\infty N(k,t)}=2m.\]
And the fraction of people who have fewer than the average number is \[\frac{\sum_{k=m}^{2m-1}N(k,t)}{\sum_{k=m}^\infty N(k,t)}=\frac{3m+1}{2(2m+1)}\geq\frac{2}{3}.\]
(since \(2/3\leq (3m+1)/(2(2m+1))<3/4\) for finite \(m\geq 1\)). But that means more than half the people have fewer than the average number of friends!
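Both ratios are easy to check numerically from the solution \(N(k,t)\); here is a quick sketch with the arbitrary choice \(m = 10\) (the factor \(t\) cancels in every ratio, so it is dropped):

```python
# N(k) = 2 m (m+1) / (k (k+1) (k+2)), the solution with t = 1.
m = 10
N = lambda k: 2.0 * m * (m + 1) / (k * (k + 1) * (k + 2))

ks = range(m, 100000)  # truncate the infinite sums at a large cutoff
total = sum(N(k) for k in ks)
mean_degree = sum(k * N(k) for k in ks) / total
below_avg = sum(N(k) for k in range(m, 2 * m)) / total

print(round(mean_degree, 1))  # ≈ 2m = 20
print(round(below_avg, 4))    # ≈ (3m+1)/(2(2m+1)) = 31/42 ≈ 0.7381
```

For \(m = 10\), about 74% of people have fewer friends than average, matching the formula.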
That's why your friends probably have more friends than you do...
Another important setting where Zipf's law appears is the pattern of citations in scientific articles. Rather than working through equations, let's simulate a simplified version of this process.
Start with a list containing the integers 1 and 2.
Choose one number randomly from the list and append it back to the list. You can use the sample() function from the random module to do this.
Append the next number (3) to the list.
Repeat these last two steps 1,000 times, incrementing the new number each time until you get a list of 2,002 integers.
Now, count up how many times each integer appears in the list and make a rank-value plot on a log-log scale. Explain how we know that the plot exhibits power-law scaling, and approximate the scaling exponent.
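One possible sketch of the simulation steps above (with a hypothetical seed for reproducibility):

```python
import random
from collections import Counter

random.seed(42)  # hypothetical seed, for reproducibility

# Start with a list containing 1 and 2; then repeatedly copy one
# existing entry chosen at random and append the next new integer.
items = [1, 2]
for new in range(3, 1003):                # 1,000 repetitions
    items.append(random.sample(items, 1)[0])
    items.append(new)

print(len(items))                         # → 2002

# Tally how often each integer appears; plotting the sorted tallies
# against rank on log-log axes should look roughly like a straight line.
tallies = sorted(Counter(items).values(), reverse=True)
```

Numbers that get copied early tend to get copied again, so a few integers accumulate many copies while most appear only once or twice — the same rich-get-richer mechanism as in the friendship network.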