<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title><![CDATA[Comfortably Numbered]]></title>
<description><![CDATA[My blog.]]></description>
<link>https://hardmath123.github.io</link>
<image>
<url>https://hardmath123.github.io/static/avatar.png</url>
<title>Comfortably Numbered</title>
<link>https://hardmath123.github.io</link>
</image>
<generator>RSS for Node</generator>
<lastBuildDate>Sun, 04 Jan 2026 06:05:27 GMT</lastBuildDate>
<atom:link href="https://hardmath123.github.io/feed.xml" rel="self" type="application/rss+xml"/>
<author><![CDATA[Hardmath123]]></author>
<language><![CDATA[en]]></language>
<item>
<title><![CDATA[Intersaccadic Perception on Massachusetts Avenue]]></title>
<description><![CDATA[<p>Seeing without looking</p>
<p>On the crispest, sunniest mornings, Massachusetts Avenue shines as if made of flowing gold. When I bike to work from Harvard to MIT, I ride straight into the sun: squinting into the brightness, holding up my gloved hand to shade my eyes. When I come to traffic lights I look down, dazed, and wait for my vision to adjust. I stare at my bike’s front wheel through a hole in my basket. Even the dusty tire blazes in the sunlight.</p>
<p>Sometimes, these moments feel almost unbearably beautiful to me: the street, the wheel, the tire-tread, the braided pattern of grooves burnt in by piercing morning rays.</p>
<p><img src="./static/intersaccadic.png" alt="Picture of my bicycle's tire"></p>
<p>When the light turns green, I only know it because the trucks around me start moving. I begin pedaling and my front wheel spins up. The tire-treads blur into streaks, impossible to resolve with my eyes. I look cautiously back up at the road.</p>
<p>But then a strange thing happens. When I look up, I think I see a flash of the tire-treads in my vision: the pattern of the grooves is sharp, but phantasmic, like an afterimage. I look down at the wheel—still a blur. I look up at the road—there’s that flash again.</p>
<p>What is going on?</p>
<p>It takes me a few blocks to form a theory. Here it is: when my eyes saccade from wheel to road, they speed up and slow down in their sockets. If the saccade is fast enough, then at some point—by the intermediate value theorem—the motion of my retinas must coincide perfectly with the motion of the image of the wheel, leaving a clear-but-fleeting impression of the treads in my mind. The flash I am seeing is vision between vision.</p>
<p>Is this theory plausible? I park the bike, I do some envelope math. Relative to me, the top of the wheel should move at the same speed as the bicycle does relative to the ground. I bike at around 15 mph, and my wheel’s rim is about a yard from my eyes. By a speed-to-angular-velocity calculation, that gives the wheel an angular velocity of about 200°/s on my retina. Reading off the <a href="https://www.sciencedirect.com/science/article/pii/0025556475900759">“main sequence”</a>, it turns out that saccades that reach a peak velocity of 200°/s are on the order of 5° of travel. That seems to me like a plausible measure of my wheel-to-road saccade, so at an order-of-magnitude level, at least, I would say this theory checks out.</p>
<p>Okay (I start to worry), but is it really possible to perceive anything at all during a saccade? What about “saccadic suppression”? But it turns out that <a href="https://peerj.com/articles/1150/">we really <em>can</em> perceive during our saccades</a>, and indeed this type of “intersaccadic perception” is tested with stimuli not unlike my bicycle wheel. And what about the sun (I continue worrying)? Is the blazing sunlight necessary for this effect? Certainly, I have only ever noticed it on blindingly bright days. But why would that be? Does my theory predict this? I’m not sure yet. Perhaps in dimmer conditions, my pupils are slightly dilated, and so my lenses need to do more work to refocus from the wheel to the (further-away) street. This would interfere with the formation of the sharp intersaccadic percept. It might also simply be the case that only in bright sunlight is the contrast between tire and groove severe enough for the effect to really stand out.</p>
<p>I have more concerns. But now I’m in the elevator, and late, and it is time to get on with the day.</p>
<hr>
<p>I worry a lot about what, if anything, I am learning in graduate school. It is true that at some level, my PhD program is vocational: I am learning to do science, I am preparing for a career as a researcher. I have no doubt that I am gaining those skills (or at the very least, that very gifted teachers are trying their best on me).</p>
<p>But I came to graduate school for more than that, I think. I want to emerge with some better understanding of myself: of what I am and how I work, of what it means to be human, of what the great mysteries of our world are. I want this understanding to serve me no matter what discipline or career I choose for myself. I really did “enter to grow in wisdom.”</p>
<p>In the maelstrom of everyday life at MIT, it is very difficult to say, at the end of a long week or semester, what exactly you have learned, how exactly you have grown. If anything, graduate school is an endeavor of discovering more and more things you do <em>not</em> understand.</p>
<p>These moments on bicycles, then, are precious to me. Strange and rare and trivial though they may be, they are all I have to remind me that, yes, I really do see the world differently—more-ly—than before.</p>
<hr>
<p>(Earlier reflections on <em>glimpses:</em> <a href="./peripheral-pareidolia.html">1</a> <a href="./bentley-blizzard-blossoms.html">2</a> <a href="./electric-guitar.html">3</a>.)</p>
]]></description>
<link>https://hardmath123.github.io/intersaccadic.html</link>
<guid isPermaLink="true">https://hardmath123.github.io/intersaccadic.html</guid>
<dc:creator><![CDATA[Hardmath123]]></dc:creator>
<pubDate>Tue, 26 Nov 2024 18:30:00 GMT</pubDate>
</item>
<item>
<title><![CDATA[Play Me a High C]]></title>
<description><![CDATA[<p>The world is like an apple whirring silently through space</p>
<p>After a chance conversation at our lab retreat, I got curious about what tides
“sound” like. I downloaded the NOAA’s <a href="https://tidesandcurrents.noaa.gov/stations.html">water level
data</a> for Boston Harbor from
the start of my PhD until today and sped it up by a factor of about 200
million.</p>
<p>You can clearly hear the tone representing the daily tides (pleasingly, it’s
tuned to “middle C”… get it?). The vibrato or “beating” is caused by the
difference between the solar and lunar constituents, and corresponds to the
semimonthly spring/neap tides. Finally, if you speed it up even more, you can
hear three annual “scrapes” corresponding to yearly variation (caused by the
Earth’s axial tilt?).</p>
<ul>
<li>200 million times faster: <audio controls src="static/harbor.wav"></audio></li>
<li>800 million times faster: <audio controls src="static/harbor-4x.wav"></audio>
</li>
</ul>
]]></description>
<link>https://hardmath123.github.io/tides.html</link>
<guid isPermaLink="true">https://hardmath123.github.io/tides.html</guid>
<dc:creator><![CDATA[Hardmath123]]></dc:creator>
<pubDate>Sun, 03 Nov 2024 18:30:00 GMT</pubDate>
</item>
<item>
<title><![CDATA[Peripheral pareidolia]]></title>
<description><![CDATA[<p>Faces that you can only see without looking</p>
<p>I work in a dense urban campus, and the view from my building looks into the windows of the next building over.</p>
<p><img src="../static/pareidolia-window.png" alt="The view from my building faces the windows of the next building over"></p>
<p>Sometimes, when I meet with my advisor, I sit at a table facing that window. On these days, a strange thing happens. When I look up at my advisor, I get the eerie sensation in the corner of my eye that there is a face in that window watching me. Of course, when I look at the window, the face vanishes—there is no one there.</p>
<p>I have been chalking this up to the paranoid hallucinations of a tired brain. But a chance email from a professor this week made me realize what is actually going on here.</p>
<p>When I look up at my advisor, my eyes bring them into focus. To do this, they converge (“cross”) slightly, so that the image is aligned on my retinas. What happens to the window in the background? Because it is behind my advisor, it <em>diverges</em> slightly: I actually see two copies of it superimposed with a slight horizontal displacement, kind of like a Magic Eye stereogram. Of course, the window also blurs out of focus—in part because it is far behind my eyes’ current focal plane, and in part because it is in my peripheral vision, which has significantly lower acuity.</p>
<p>Here is a video of what happens when you simulate diverging and blurring the window. I’ve tiled the image many times so that you always have a few copies in your peripheral vision.</p>
<video src="../static/pareidolia-video.mp4" style="width: 100%;" controls></video>
<p>Aha! At the end of the simulation, the window-images align perfectly to form a pair of little faces side by side. The light fixtures form the eyes and the nose, and the frame provides the lips and the mouth. The faces are not the clearest, but they are certainly face-like enough to evoke pareidolia.</p>
<p>Of course, if you were to look directly at the window, your eyes would unconverge, the windows would fuse, and the face would seem to disappear. So, <em>it’s not paranoia, it’s peripheral pareidolia!</em></p>
<p>(Food for thought: how can we automatically find/create more examples of “peripheral pareidolia”? Does this effect say something interesting about the organization of our visual system?)</p>
]]></description>
<link>https://hardmath123.github.io/peripheral-pareidolia.html</link>
<guid isPermaLink="true">https://hardmath123.github.io/peripheral-pareidolia.html</guid>
<dc:creator><![CDATA[Hardmath123]]></dc:creator>
<pubDate>Mon, 29 Apr 2024 18:30:00 GMT</pubDate>
</item>
<item>
<title><![CDATA[Undetectable Bayesian Improv Theater]]></title>
<description><![CDATA[<p>How to pretend like you have a biased coin</p>
<p>Suppose two Bayesians, Alice and Bob, put on a variety show where they take
turns tossing a biased coin and announcing outcomes to a live studio audience.
(Bayesians love this kind of thing—it keeps them entertained for hours…)</p>
<p>Unfortunately, just as Alice goes on stage, she realizes with dread that she
forgot to bring the coin. Thinking on her feet, she mimes pulling a tiny
imaginary coin out of her pocket, and says “This is a biased coin!” It
works—the audience buys it and the crowd goes wild.</p>
<p>She mimes tossing the pretend coin and randomly announces “heads” or “tails.”
Then, she hands the coin to Bob, who (catching on) also mimes a toss. This has
just turned into a Bayesian improv show.</p>
<p>But now Bob has a problem. Should he announce “heads” or “tails”? He could
choose uniformly at random, but after many rounds the audience might get
suspicious if the coin’s bias is too close to 50%. How can he keep up the
charade of a <em>biased</em> coin?</p>
<p>Here’s what Bob does. In the spirit of “yes, and…,” he infers the coin’s bias
based on Alice’s reported outcome (say, with a uniform prior) and samples a
fresh outcome with that bias. So if Alice said “heads,” Bob would be a bit
likelier to say “heads” as well.</p>
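<p>Concretely, Bob’s rule is Laplace’s rule of succession: under a uniform prior, after seeing $k$ heads in $m$ tosses, the posterior predictive probability of heads is $(k+1)/(m+2)$. Here is a tiny sketch of my own to illustrate (not from the original argument):</p>
<pre><code class="lang-python">from fractions import Fraction

# Posterior predictive of a Beta(1, 1) prior after k heads in m tosses
# (Laplace's rule of succession).
def predictive(k, m):
    return Fraction(k + 1, m + 2)

print(predictive(1, 1))  # after Alice's lone "heads", Bob says heads with probability 2/3
</code></pre>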
<p>Then Alice takes the coin back and does the same, freshly inferring the coin’s
bias from the past <em>two</em> tosses. In this way, the two actors take turns
announcing simulated outcomes to the oblivious audience, while building a
shared understanding of the coin’s bias.</p>
<p>What happens? How can we characterize the sequence of outcomes? Intuitively, we
might expect either a “rich-get-richer” effect where they end up repeating
heads or tails. Or we might expect a “regression-to-the-mean” where they
converge to simulating a fair coin.</p>
<p>The surprising answer is that this process is indistinguishable from Alice and
Bob tossing a <em>real</em> coin with fixed bias (chosen uniformly). A critic lurking
in the audience would never suspect something afoot!</p>
<p>This result is a consequence of the correspondence between the Pólya
distribution and the Beta-binomial distribution.</p>
<p>I have a hunch that this observation could be useful: perhaps in designing a
new kind of cryptographic protocol, or perhaps in explaining something about
human cognition. If you have ideas, let me know!</p>
<hr>
<p>Proof sketch: Model the actors’ belief with a Beta distribution with parameters
$(h, t)$ initialized to $(1, 1)$, i.e. uniform. At each toss the probability of
heads is given by $h/(h+t)$, and the outcome increments $h$ or $t$ by 1. You
can think of this as a Pólya urn with h black and t white balls: each time you
draw a ball, you put it back and add a “bonus” ball of the same color. It is
well-known (look
<a href="https://djalil.chafai.net/blog/2015/11/30/back-to-basics-polya-urns/">here</a> or
<a href="https://math.uchicago.edu/~may/REU2013/REUPapers/Helfand.pdf">here</a> or
<a href="https://www.randomservices.org/random/bernoulli/BetaBernoulli.html">here</a>)
that this is the same as the Beta-binomial process.</p>
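<p>If you’d rather not take the algebra on faith, here is a small simulation I sketched (the toss and trial counts are my arbitrary choices). The Beta-binomial correspondence predicts that the number of “heads” announced across $n$ tosses of the improv show is uniform over $\{0, \dots, n\}$, exactly as if a bias had been drawn uniformly at random up front:</p>
<pre><code class="lang-python">import numpy as np

def improv_show(n_tosses, rng):
    # Shared belief starts at Beta(1, 1), i.e. a uniform prior on the bias.
    h, t = 1, 1
    heads = 0
    for _ in range(n_tosses):
        # Announce an outcome sampled from the posterior predictive h/(h+t)...
        if rng.random() < h / (h + t):
            h += 1      # ...and update the shared belief (a Polya urn step).
            heads += 1
        else:
            t += 1
    return heads

rng = np.random.default_rng(0)
n, trials = 9, 200_000
counts = np.bincount([improv_show(n, rng) for _ in range(trials)], minlength=n + 1)
freqs = counts / trials
print(freqs.round(3))  # each head-count 0..9 shows up with frequency ~1/10
</code></pre>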
<blockquote>
<p>See also: <a href="https://a.exozy.me/posts/asian-bayesian-2/">cool new blog post that riffs on these
ideas</a></p>
</blockquote>
]]></description>
<link>https://hardmath123.github.io/bayesian-improv.html</link>
<guid isPermaLink="true">https://hardmath123.github.io/bayesian-improv.html</guid>
<dc:creator><![CDATA[Hardmath123]]></dc:creator>
<pubDate>Sat, 20 Jan 2024 18:30:00 GMT</pubDate>
</item>
<item>
<title><![CDATA[The light tries to enter the long black branches]]></title>
<description><![CDATA[<p>Thinking about a streetlight’s wooden halo</p>
<p>Have you noticed the way streetlights shine through the bare branches of trees
on cold winter nights? Walking home from work tonight I was struck by the
almost overwhelming beauty of the scene. The light creates a perfect,
glittering halo around itself, like the moon in a Van Gogh painting. The tree
in turn all but reaches its fingers out to grasp the light. If you move your
head from side to side, it feels as if a wormhole has opened in the tree,
sucking the branches into the light’s force field.</p>
<p>I think this happens because of the Fresnel effect. If the light hits a branch
at just the right grazing angle, the wood (like many other materials in the world)
becomes unexpectedly shiny. In the brambles that hang off of a tree’s bare
branch, the only twigs that catch the light’s glint are the ones that make a
specific shallow angle with your eye. By radial symmetry, it’s not hard to see
that this should result in a glowing circular halo. It’s similar to how divers
see a <a href="https://en.wikipedia.org/wiki/Snell%27s_window">circular window</a> on the
surface of the water when they look up, or the <a href="https://en.wikipedia.org/wiki/22°_halo">22°
halo</a> you sometimes see around the sun
— but here the medium is wood, not water.</p>
<p>If you live in a place that has seasons, I encourage you to go and see this
effect for yourself. It’s a hard phenomenon to capture on camera — it
requires high dynamic range, high-resolution video, and patience outdoors when
it’s cold and dark. For now, here is the best I could do with my old iPhone.
(I stabilized the light with a little Python script.)</p>
<video style="width: 100%;" controls>
<source src="./static/tree-fresnel.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<p>For comparison, here are two frames from this video:</p>
<p><img src="./static/tree-fresnel-000.png" alt="Light outside tree"></p>
<p><img src="./static/tree-fresnel-196.png" alt="Light inside tree"></p>
]]></description>
<link>https://hardmath123.github.io/tree-fresnel.html</link>
<guid isPermaLink="true">https://hardmath123.github.io/tree-fresnel.html</guid>
<dc:creator><![CDATA[Hardmath123]]></dc:creator>
<pubDate>Sun, 26 Nov 2023 18:30:00 GMT</pubDate>
</item>
<item>
<title><![CDATA[Half a Jar of Honey]]></title>
<description><![CDATA[<p>Investigating the stability of honey containers</p>
<p>I knocked over a tall jar of honey. Clumsy, clumsy! But the jar was nearly empty; jars tend to fall when they are nearly empty.</p>
<p>But jars also tend to fall when they are full, which got me thinking: for both full and empty jars, we can argue by symmetry that their center of mass is halfway up. By Rolle’s Theorem, then, at some intermediate fill level the center of mass must reach an extremum. And since the center of mass can never get higher than halfway up, that extremum must be a minimum. This suggests that the jar’s stability increases and then decreases as the honey is consumed.</p>
<p>When is the jar most stable? Working in units where the empty jar’s height and mass are 1, its radius is $r$, honey’s density is $\alpha$, and the honey fills the jar to height $h$, the height of the center of mass is given by:
$$
c = \frac{\alpha r^2h(h/2) + 1/2}{\alpha r^2h + 1}
$$</p>
<p>Let’s assume honey is extremely viscous (i.e. changes shape slowly), and that the jar tips over if its center of mass is over its rim. Then the maximum angle you can tip the jar before it falls over is:
$$
\theta^\star = \tan^{-1}(r/c)
$$
The question is then, for a given $r$ and $\alpha$, what $h$ maximizes $\theta^\star$? Intuitively, it seems like it should be somewhere halfway between empty and full.</p>
<p>We can compute the optimal $h$ by differentiating $\theta^\star$ with respect to $h$ and setting the derivative to zero:
$$
\frac{d\theta^\star}{dh}=-\frac{r^2}{r^2+c^2}\frac{dc}{dh} = 0
$$
The positive solution to this, thanks to WolframAlpha, is given by:
$$
h^\star=\frac{1}{1+\sqrt{\alpha r^2+1}}
$$
Does this make sense? As $\alpha r^2$ increases (heavier honey), the optimal height gets lower, which makes sense because the honey begins to have a ballasting effect.</p>
<p>Now we can plug in some numbers. A standard 8-oz honey bottle is about 6 inches tall, has a base radius of about 1 inch, and weighs about 1oz. The density of honey is 0.8oz per cubic inch. In our units, this means the jar’s radius is $1/6$ and the density of honey is $0.8/(1/6)^3$. Plugging this in, we find remarkably that $h^\star \approx 0.3$. So a jar of honey is most stable when only about 1/3 full!</p>
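<p>As a sanity check, here is a short numerical sketch of my own (not part of the original derivation): brute-force maximization of $\theta^\star$ over $h$, compared against the closed form, with the post’s numbers plugged in.</p>
<pre><code class="lang-python">import numpy as np

alpha = 0.8 * 6**3   # honey density in jar units (0.8 oz/in^3; 6-inch, 1-oz jar)
r = 1 / 6            # jar radius in units of the jar's height

def tip_angle(h):
    # Height of the center of mass, then the maximum tip-over angle arctan(r/c).
    c = (alpha * r**2 * h * (h / 2) + 1 / 2) / (alpha * r**2 * h + 1)
    return np.arctan(r / c)

hs = np.linspace(1e-6, 1, 100_001)
h_numeric = hs[np.argmax(tip_angle(hs))]
h_closed = 1 / (1 + np.sqrt(alpha * r**2 + 1))
print(h_numeric, h_closed)  # both come out near 0.293
</code></pre>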
]]></description>
<link>https://hardmath123.github.io/honey.html</link>
<guid isPermaLink="true">https://hardmath123.github.io/honey.html</guid>
<dc:creator><![CDATA[Hardmath123]]></dc:creator>
<pubDate>Sat, 11 Nov 2023 18:30:00 GMT</pubDate>
</item>
<item>
<title><![CDATA[How much should you stir your tofu?]]></title>
<description><![CDATA[<p>Can stirring more ever lead to worse outcomes?</p>
<p>Suppose you are cooking tofu, cut into tasty 1-inch cubes. On your stove it takes each cube-face 10 minutes to cook. You could cook each of the 6 faces of each cube in one hour (60 minutes) by rotating the cube to an uncooked face every 10 minutes, but this is clearly tedious and suboptimal.</p>
<p>A better idea is stirring: instead of carefully rotating each cube to an uncooked face, you can vigorously stir your pan and reorient each cube uniformly at random (like rolling dice).</p>
<p>The question I want to consider today is, “how often should you stir your tofu cubes”? For example, you might choose to stir them every 10 minutes for an hour, for a total of 6 stirs including right at the beginning. In that case, you might get lucky and have every face exposed to heat once. However, because each stir is an independent random roll, the more likely scenario is that some face gets cooked twice or more (burnt!) and other faces don’t get any heat (raw!). <strong>Intuitively, it seems like we should stir more frequently to “even out” the heat.</strong> Let’s check this intuition.</p>
<hr>
<p>Suppose we stir $n$ times over the course of the hour, and we are willing to tolerate being off by up to $\delta=5\%$ of optimal cookedness; that is, we consider the tofu raw before 9 minutes and 30 seconds, and burnt after 10 minutes and 30 seconds. Let $f(n)$ be the proportion of burnt or raw faces, in expectation. We want to see how $f(n)$ depends on $n$.</p>
<p>We can compute $f(n)$ many ways. Most obviously, we can simulate the tofu system with some large number of cubes and see what happens.</p>
<pre><code class="lang-python">import numpy as np
from matplotlib import pyplot as plt
mean = 1/6
tol = 0.05
def experiment(n):
    N = 1_000
    rolls = np.random.multinomial(n, np.ones(6)/6, size=N)
    toasts = rolls / n
    burns = ((np.abs(toasts - mean) >= tol * mean) * 1.0).mean()
    return burns
</code></pre>
<p>It is also not hard to work this out analytically. First, by linearity of expectation we can just look at one face, which acts like a binomial random variable: for each of the $n$ stirs, there is a $1/6$ probability of getting heat. We can subtract the CDFs at the tolerance limits $\mu\cdot(1\pm\delta)$ to compute the proportion of times the face is well-cooked, and then subtract from 1 to get the proportion of bad faces.</p>
<pre><code class="lang-python">import scipy.stats
def analytical(n):
    return 1 - (
        scipy.stats.binom.cdf( (1 + tol) * (n / 6), n, 1/6 ) -
        scipy.stats.binom.cdf( (1 - tol) * (n / 6), n, 1/6 )
    )
</code></pre>
<p>We can also use statistics tricks to approximate the answer in useful closed forms. For example, we can take a normal approximation to the binomial as $n \rightarrow \infty$. The mean is $\mu=1/6$, of course, and the variance is given by $\sigma^2 = p(1-p)/n = 5/(36n)$. We can then plug those parameters into the normal CDF, along with the tolerance limits as above.</p>
<pre><code class="lang-python">def normal(n):
vari = 1/6 * (1 - 1/6) / n
z = (tol * mean) / np.sqrt(vari)
return 1 - (scipy.stats.norm.cdf(z) - scipy.stats.norm.cdf(-z))
</code></pre>
<p>Finally, we can apply Chebyshev’s inequality or a Chernoff bound. These formulas only provide an upper bound on the proportion of bad faces, but they don’t depend on a normal approximation and are thus guaranteed to be true. I won’t work through the derivations here.</p>
<pre><code class="lang-python">def chebyshev(n):
vari = 1/6 * (1 - 1/6) / n
z = (tol * mean) / np.sqrt(vari)
return np.minimum(1., 1 / z ** 2)
def chernoff(n):
return np.minimum(1., 2 * np.exp(-(tol ** 2) / 3 * (1/6) * n))
</code></pre>
<p>Plotting all of these metrics out to asymptotic $n \rightarrow \infty$ (see the graph below), we see several expected patterns:</p>
<ol>
<li>As $n$ increases, the proportion of burnt faces goes down with exponential falloff.</li>
<li>The analytical solution closely tracks the simulation, at least until the variance of the simulation gets high enough to make it unreliable.</li>
<li>The normal approximation is indistinguishable from the analytical solution.</li>
<li>The bounds are quite loose. Chebyshev “wins” for a little while, but ultimately Chernoff’s exponentiality kicks in and dominates.</li>
</ol>
<p>So far, so good: it seems like you can get quite well-cooked tofu with only a logarithmic amount of stirring effort!</p>
<pre><code class="lang-python">ns = np.arange(1, 30000, 100)
plt.plot(ns, [experiment(int(n)) for n in ns], 'x', label='Simulation')
plt.plot(ns, [analytical(int(n)) for n in ns], '-', label='Analytical')
plt.plot(ns, [normal(int(n)) for n in ns], '--', label='Normal approximation')
plt.plot(ns, [chebyshev(int(n)) for n in ns], '-', label='Chebyshev\'s inequality')
plt.plot(ns, [chernoff(int(n)) for n in ns], '-', label='Chernoff bound')
plt.yscale('log')
plt.ylabel('P(burnt or raw face)')
plt.xlabel('n, the number of tosses per hour')
plt.legend()
</code></pre>
<pre><code><matplotlib.legend.Legend at 0x7fbf0b32ae60>
</code></pre><p><img src="static/tofubes/output_9_1.png" alt="png" /></p>
<p>But asymptotic $n$ is unreasonable in this setting: for example, 6000 flips per hour (6 kfph) is more than one flip per second. Let’s zoom in on small $n$ to see what is happening at the scale of the real world. In this plot, I’m hiding the two bounds (which are all maxed out at 1) and only showing up to $n=600$, which is a stir every 6 seconds.</p>
<pre><code class="lang-python">ns = np.arange(1, 600)
plt.plot(ns, [experiment(int(n)) for n in ns], 'x', label='Simulation')
plt.plot(ns, [analytical(int(n)) for n in ns], '.', label='Analytical')
plt.plot(ns, [normal(int(n)) for n in ns], '--', label='Normal approximation')
plt.ylabel('P(burnt or raw face)')
plt.xlabel('n, the number of tosses per hour')
plt.legend()
</code></pre>
<pre><code><matplotlib.legend.Legend at 0x7fbf09037460>
</code></pre><p><img src="static/tofubes/output_11_1.png" alt="png" /></p>
<p>A curious pattern emerges! Here is what I notice:</p>
<ol>
<li>The analytical solution (orange dots) continues to track the simulation (blue crosses).</li>
<li>However, they both diverge from the normal approximation (which is perhaps as we expect at low $n$).</li>
<li>This divergence is systematic. The analytical probabilities form an interesting “banded” structure.</li>
<li>Most surprisingly, sometimes increasing $n$ <em>increases</em> the number of bad faces!</li>
<li>To do better than $n=6$, you have to go all the way out to $n\approx 500$.</li>
</ol>
<p>The puzzle is, <strong>Why does more stirring cause worse cooking?</strong></p>
<p>Let’s consider the cases $n=6$ and $n=12$. For $n=6$, we already reasoned that a face is well-cooked if it is exposed to heat for only one of the 6 turns. This has probability $\binom{6}{1}(1/6)^1(5/6)^5 \approx 0.402$, and indeed the graph above shows that analytically, we expect a $0.6$ probability of a face getting burnt or being raw at $n=6$.</p>
<p>Now consider $n=12$. A face is definitely well-cooked if it gets 2 turns on the heat, because $2/12 = 1/6 = \mu$. But what about 1 turn or 3 turns? In each case, the relative deviation from $\mu$ is $(1/12)/(1/6) = 0.5 \geq \delta$, so the face would either be raw or get burnt. Hence, the probability of a well-cooked face is given by $\binom{12}{2}(1/6)^2(5/6)^{10} \approx 0.296$, which yields a probability of $0.704$ that the face is raw or burnt. Higher than $0.6$ at $n=6$!</p>
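<p>Both probabilities are quick to confirm with <code>scipy</code> (a check I added, not part of the original argument):</p>
<pre><code class="lang-python">from scipy.stats import binom

# P(exactly 1 heat exposure in 6 stirs): the only well-cooked outcome at n=6.
p6 = binom.pmf(1, 6, 1/6)
# P(exactly 2 heat exposures in 12 stirs): the only well-cooked outcome at n=12.
p12 = binom.pmf(2, 12, 1/6)
print(round(p6, 3), round(p12, 3))  # 0.402 0.296
</code></pre>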
<p>Another way to see what’s going on is to consider 2-faced pancakes instead of 6-faced tofu cubes. If you cook a pancake with $n=2$ random flips, you have a good chance that you will cook both sides (one on each flip), though of course there is some chance you burn one side by cooking it twice, and leave one side uncooked. But if you cook a pancake with $n=3$ flips, you will necessarily always burn one side and leave the other side uncooked, because at best you will get a 1/3-2/3 ratio of cooking time.</p>
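<p>The pancake version is small enough to check exhaustively; here is an enumeration I added over all $2^3$ flip outcomes (each of the 3 segments lasts 20 minutes, and each side needs 30):</p>
<pre><code class="lang-python">from itertools import product

# Enumerate every orientation sequence for 3 uniformly random pancake flips.
deviations = []
for seq in product((0, 1), repeat=3):
    side0 = 20 * seq.count(0)      # minutes of heat that side 0 receives
    deviations.append(abs(side0 - 30))
# No sequence can give a side its required 30 minutes: the best case is
# 40/20, i.e. at least 10 minutes over on one side and under on the other.
print(min(deviations))  # 10
</code></pre>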
<p>Returning to cubes now, let’s see what happens more generally. Say a face gets $X$ turns on the heat where $X \sim \text{Binom}(n, 1/6)$. The probability of being well-cooked is $\Pr[|X/n-\mu| \leq \delta\mu]$ where $\mu=1/6$. Breaking this up, we can sum over possible outcomes of $X$, to have $\sum_{(1-\delta)(1/6) \leq x/n \leq (1+\delta)(1/6)} \Pr[X=x]$. In other words, we are summing over integer multiples of $1/n$ in the range $[(1-\delta)(1/6), (1+\delta)(1/6)]$. How many integer multiples are there in that range? This is easily given by $\lceil n(1/6)(1+\delta) \rceil - \lfloor n(1/6)(1-\delta) \rfloor - 1$. We can plot this against the analytical probability to get some insight.</p>
<pre><code class="lang-python">ns = np.arange(1, 100)
plt.plot(ns, [analytical(int(n)) for n in ns], '.', label='Analytical')
plt.plot(
ns,
(np.ceil(ns * 1/6 * (1 + tol)) -
np.floor(ns * 1/6 * (1 - tol)) - 1),
label='Number of integer multiples of 1/n in range'
)
plt.xlabel('n, the number of tosses per hour')
plt.legend()
</code></pre>
<pre><code><matplotlib.legend.Legend at 0x7fc0105f1150>
</code></pre><p><img src="static/tofubes/output_13_1.png" alt="png" /></p>
<p>The spikes in the number-of-integers graph correspond directly to the jumps between bands. This suggests the following explanation: there are two “forces” at play.</p>
<ol>
<li>The number of integer multiples of $1/n$ that could be matched within the range allowed by the tolerance, which jumps around in a quantized manner according to number theory.</li>
<li>The sum of probabilities of landing exactly on each of those integer multiples, which decreases as we add more flips, because the probability mass spreads over more possible outcomes.</li>
</ol>
<p>When the first “force” spikes, it causes the probability of a bad face to drop suddenly because there are more ways to cook a good face. However, the second “force” causes the probability of a bad face to rise gradually because each way to make a good face becomes less likely. This explains the banding structure in the graph above.</p>
<p>To summarize, in the limit, continuous stirring indeed helps cook your tofu more evenly. However, if you are stirring only occasionally, then sometimes more stirring can actually harm your tofu!</p>
]]></description>
<link>https://hardmath123.github.io/tofubes.html</link>
<guid isPermaLink="true">https://hardmath123.github.io/tofubes.html</guid>
<dc:creator><![CDATA[Hardmath123]]></dc:creator>
<pubDate>Mon, 05 Jun 2023 18:30:00 GMT</pubDate>
</item>
<item>
<title><![CDATA[Light from shadow, seeing from seeing]]></title>
<description><![CDATA[<p>Thoughts on a blue and yellow photograph I took this summer</p>
<p>This summer I was briefly in Vancouver, and after dinner on my last day I found
myself walking several blocks back to my hotel. The night was dark but the
sidewalk was brightly lit and it was a lovely journey. Along the way, I took
this picture:</p>
<p><img src="static/vancouver-streetlight.jpg" alt="A streetlight casts two shadows, blue and
yellow"></p>
<p>What struck me is how the streetlight casts two shadows, blue and yellow, and
moreover those shadows appear opposite the yellow and blue lamps, respectively.
What could possibly be going on?</p>
<p>Here is an explanation: the two lamps together create a kind of grayish-white
light that bathes the sidewalk. Where the yellow lamp is occluded, the blue
light is dominant, so the shadow is blue. Similarly, where the blue lamp is
occluded, the yellow light is dominant, so the shadow is yellow.</p>
<p>Looking at this scene I’m reminded of painter Wayne Thiebaud’s rich, saturated
shadows. You could say the perceptual effect here demonstrates that “white
light” is the sum of all wavelengths, a fact we learn in grade school (I think
there is an exhibit at the SF Exploratorium with a similar concept). But to me,
this also demonstrates the range of what we are willing to call “white.” If one
of the lamps were to burn out, our eyes would adjust to the blue or yellow
almost immediately, and we would still see the sidewalk as gray — we
experience a truly remarkable “color constancy” across lighting conditions. In
this way, when he paints a shadow as a saturated, non-gray color, Thiebaud sees
beyond his own seeing.</p>
<hr>
<p>Update on August 31, 2025</p>
<p><img src="./static/cambridge-streetlight.png" alt=""></p>
]]></description>
<link>https://hardmath123.github.io/thiebaud.html</link>
<guid isPermaLink="true">https://hardmath123.github.io/thiebaud.html</guid>
<dc:creator><![CDATA[Hardmath123]]></dc:creator>
<pubDate>Sun, 25 Sep 2022 18:30:00 GMT</pubDate>
</item>
<item>
<title><![CDATA[PeLU: Porcelain-Emulated Linear Unit]]></title>
<description><![CDATA[<p>A low-power deep learning inference mechanism inspired by flush toilets</p>
<p>The other day my toilet broke and I involuntarily learned a lot about how flushing works. My friend suggested an analogy for me: flushing toilets is like a neuron’s activation: once a critical threshold is met, there’s an “all-at-once” response.</p>
<p>That got me thinking, could we implement deep neural networks in plumbing? It turns out, the answer is yes! A very simplified model of a flush toilet’s nonlinear behavior is as follows: it’s a bucket, into which water can be poured, and there is a hole at height $h \geq 0$. If you pour in volume $v$ of water into the bucket, the output that flows out of the hole is $\text{ReLU}(v - h)$.</p>
<p>The second component we need to build a neural network is a linear map. We can do this by attaching a branching pipe to the hole. This component will have $k$ branches with cross-sectional areas $A_1, A_2, \dots, A_k > 0$. By conservation of mass and a simple pressure argument, the fraction of the water that pours out of branch $i$ is $A_i / \Sigma_j A_j$.</p>
<p>Together, these components allow us to compute a function from $\mathbb{R}\rightarrow \mathbb{R}^k$, which looks something like $\text{PeLU}(v, \vec{A}, h) = \text{ReLU}(v - h)\cdot \vec{A} / \Sigma_j A_j$. Here, “PeLU” stands for “Porcelain-Emulated Linear Unit.” It is clear how to vectorize this expression over $v$, which effectively creates a new kind of neural network “layer” with trainable parameters $\vec{A}$ and $h$ for each input dimension. To enforce the positivity constraint on $h$ and $A_i$, we will actually work with the following key equation: $\text{PeLU}(v, \vec{A}, h) = \boxed{\text{ReLU}(v - h^2) \cdot \text{softmax}(\vec{A})}$.</p>
<p>All that is left to do at this point is to implement this in PyTorch and train it.</p>
<pre><code>import torch
class PeLU(torch.nn.Module):
def __init__(self, in_feat, out_feat):
super().__init__()
self.heights = torch.nn.Parameter(
torch.randn(in_feat))
self.weights = torch.nn.Parameter(
torch.randn(in_feat, out_feat)
)
def forward(self, X):
X = torch.nn.functional.relu(X - self.heights ** 2)
X = X.matmul(self.weights.softmax(dim=1))
return X
</code></pre><p>Here, I built a PeLU layer that can be slipped into any PyTorch model, mapping <code>in_feat</code> inputs to <code>out_feat</code> outputs. Next, let’s stack some PeLU layers together and train the result on the classic “Iris” dataset, which has 4 features and assigns one of 3 labels. We will create a “hidden layer” of size 3, just to keep things interesting.</p>
<pre><code>from iris import feat, labl
m = torch.nn.Sequential(
PeLU(4, 3),
PeLU(3, 3)
)
o = torch.optim.Adam(m.parameters(), lr=0.01)
lf = torch.nn.CrossEntropyLoss()
for i in range(10_000):
o.zero_grad()
pred = m(feat * 10)
loss = lf(pred, labl)
loss.backward()
o.step()
print('Loss:', loss)
print('Error:', 1. - ((torch.argmax(pred, dim=1) == labl) * 1.).mean())
</code></pre><p>This trains very quickly, in seconds, and gives an error of 2%. Of course, we haven’t split the dataset into train and test sets, so we may be overfitting.</p>
<p>By the way, you may have noticed that I multiplied <code>feat</code> by 10 before passing it to the model. Because of the conservation of mass, the total amount of water in the system is constant. But each bucket “loses” some water that accumulates below the hole. To make sure there’s enough water to go around, I boosted the total amount.</p>
<p>But that’s all that needs to be done! Once we export the parameters and write a small visualizer…</p>
<p><img src="static/iris-pelu.gif" alt="GIF of PeLU network inferring an Iris example"></p>
<p>Isn’t that a delight to watch? I may even fabricate one to have as a desk toy — it shouldn’t be hard to make a 3D-printable version of this. If we wanted to minimize the number of pipes, we could add an L1 regularization term that enforces sparsity in the $\vec{A}$ terms.</p>
<p>Some final thoughts: this system has a couple of interesting properties. First, there’s a kind of “quasi-superposition” that it allows for: if you pour in more water on top to incrementally refine your input, the output will automatically update. Second, the “conservation of mass” guarantees that the total water output will never exceed the total water input. Finally, it’s of course entirely passive, powered only by gravity.</p>
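<p>The conservation guarantee is easy to check numerically. Below is a small NumPy re-implementation of the boxed PeLU equation (a standalone sketch; the <code>pelu</code> helper and the random parameters are mine, not from the post), verifying that each example’s total output never exceeds its total input: since $\text{ReLU}(v - h^2) \leq v$ for $v \geq 0$ and the softmax weights of each branch sum to 1, no water is created.</p>

```python
import numpy as np

def softmax(a, axis):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pelu(X, heights, weights):
    # ReLU(v - h^2): water above the hole flows out...
    out = np.maximum(X - heights ** 2, 0.0)
    # ...and is split among branches whose fractions sum to 1
    return out @ softmax(weights, axis=1)

rng = np.random.default_rng(0)
heights = rng.standard_normal(4)
weights = rng.standard_normal((4, 3))
X = rng.random((100, 4)) * 10  # batch of non-negative inputs

out = pelu(X, heights, weights)
# total water out never exceeds total water in
assert np.all(out.sum(axis=1) <= X.sum(axis=1) + 1e-9)
```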
<p>This has me wondering if we can build extremely low-power neural network inference devices by optimizing analog systems using gradient descent in this way (a labmate pointed me to <a href="https://arstechnica.com/science/2018/07/neural-network-implemented-with-light-instead-of-electrons/">this</a>, for example).</p>
<p>Below is a little widget you can use to enjoy playing with PeLU networks. All inputs must be between 0 and 10. :)</p>
<p>Sepal length (cm): <input id="sl" type="text" value="6.2"></input><br/>
Sepal width (cm): <input id="sw" type="text" value="3.4"></input><br/>
Petal length (cm): <input id="pl" type="text" value="5.4"></input><br/>
Petal width (cm): <input id="pw" type="text" value="2.3"></input><br/></p>
<p><input type="button" id="go" value="Predict!"></input><br/></p>
<canvas id="world" width="500" height="500"></canvas>
<script>
var m = [{'h': [2.0516297817230225, 2.18482890761347e-30, 4.2594499588012695, 2.937817096710205], 'w': [[0.08244021981954575, 0.35453349351882935, 0.5630263090133667], [0.783989667892456, 0.0022806653287261724, 0.2137296199798584], [0.0007646767771802843, 0.8042643070220947, 0.19497093558311462], [0.0004950931761413813, 0.9982838034629822, 0.0012211321154609323]]}, {'h': [2.2704419876575757e-26, 33.44374465942383, 12.223723411560059], 'w': [[0.9706816077232361, 0.021526599302887917, 0.007791891228407621], [0.0013629612512886524, 0.022002533078193665, 0.9766345620155334], [0.018276285380125046, 0.9780184626579285, 0.0037053129635751247]]}];
function Chamber(x, y, f, h, w, p) {
this.x = x;
this.y = y;
this.f = f;
this.h = h;
this.w = w;
this.p = p;
this.l = null;
}
Chamber.prototype.flow = function(dl) {
if (this.f <= this.h) return;
dl = Math.max(0.1 * this.f - this.h, dl);
for (var j = 0; j < this.p.length; j++) {
this.p[j].f += dl * this.w[j];
}
this.f -= dl;
};
Chamber.prototype.draw = function(ctx) {
var height = 100;
ctx.save();
ctx.translate(this.x, this.y);
if (this.l !== null) {
ctx.save();
ctx.font = '8pt Helvetica';
ctx.translate(0, height);
ctx.rotate(-Math.PI / 2);
ctx.fillText(this.l, 0, 5);
ctx.restore();
}
ctx.fillStyle = '#acf';
ctx.fillRect(10, height - this.f, 30, this.f);
ctx.beginPath();
ctx.arc(0, 10, 10, -Math.PI / 2, 0);
ctx.lineTo(10, height);
ctx.lineTo(40, height);
ctx.lineTo(40, 10);
ctx.arc(50, 10, 10, Math.PI, -Math.PI / 2);
ctx.stroke();
if (this.h !== null) {
ctx.beginPath();
ctx.arc(35, height - this.h, 5, 0, Math.PI * 2, true);
ctx.stroke();
}
ctx.restore();
if (this.h !== null) {
for (var j = 0; j < this.p.length; j++) {
ctx.save();
ctx.beginPath();
ctx.moveTo(this.x + 35, this.y + height - this.h);
ctx.bezierCurveTo(
this.x + 35, this.y + height - this.h + 20,
this.p[j].x + 25, this.p[j].y - 20,
this.p[j].x + 25, this.p[j].y
);
ctx.strokeStyle = 'black';
ctx.lineWidth = this.w[j] * 10;
ctx.stroke();
ctx.lineWidth = ctx.lineWidth * 0.8;
ctx.strokeStyle = this.f > this.h ? '#acf' : 'white';
ctx.stroke();
if (this.f > this.h) {
ctx.strokeStyle = '#acf';
ctx.beginPath();
ctx.moveTo(this.p[j].x + 25, this.p[j].y);
ctx.lineTo(this.p[j].x + 25, this.p[j].y + 100);
ctx.stroke();
}
ctx.restore();
}
}
if (this.h === null) {
var best = true;
for (var j = 0; j < cs[cs.length - 1].length; j++) {
if (this.f < cs[cs.length - 1][j].f) best = false;
}
if (best) {
ctx.save();
ctx.fillStyle = 'rgba(255, 255, 0, 0.2)';
ctx.fillRect(this.x - 10, this.y - 10, 50 + 20, height + 20);
ctx.restore();
}
}
};
var cs = [];
for (var i = 0; i < m.length; i++) {
cs.push([]);
for (var j = 0; j < m[i].h.length; j++) {
var c = new Chamber(30 + 20 * i + 80 * j, 20 + 150 * i, 0, m[i].h[j], m[i].w[j], []);
cs[cs.length - 1].push(c);
}
}
cs.push([]);
for (var j = 0; j < cs[cs.length - 2][0].w.length; j++) {
var c = new Chamber(30 + 20 * i + 80 * j, 20 + 150 * i, 0, null, [], []);
cs[cs.length - 1].push(c);
}
for (var i = 0; i < m.length; i++) {
for (var j = 0; j < cs[i].length; j++) {
cs[i][j].p = cs[i + 1];
}
}
cs[0][0].l = 'sepal length (cm)';
cs[0][1].l = 'sepal width (cm)';
cs[0][2].l = 'petal length (cm)';
cs[0][3].l = 'petal width (cm)';
cs[2][0].l = 'P(iris setosa)';
cs[2][1].l = 'P(iris versicolour)';
cs[2][2].l = 'P(iris virginica)';
cs[0][0].f = 62;
cs[0][1].f = 34;
cs[0][2].f = 54;
cs[0][3].f = 23;
function frame() {
var world = document.getElementById('world');
world.width = world.width;
var ctx = world.getContext('2d');
for (var i = 0; i < cs.length; i++) {
for (var j = 0; j < cs[i].length; j++) {
cs[i][j].draw(ctx);
}
}
for (var i = 0; i < cs.length - 1; i++) {
for (var j = 0; j < cs[i].length; j++) {
cs[i][j].flow(0.1);
}
}
window.requestAnimationFrame(frame);
}
window.addEventListener('load', function() {
var sl = document.getElementById('sl');
var sw = document.getElementById('sw');
var pl = document.getElementById('pl');
var pw = document.getElementById('pw');
var go = document.getElementById('go');
go.addEventListener('click', function() {
for (var i = 0; i < cs.length; i++) {
for (var j = 0; j < cs[i].length; j++) {
cs[i][j].f = 0.;
}
}
cs[0][0].f = Math.max(0., Math.min(100., (parseFloat(sl.value) || 0) * 10));
cs[0][1].f = Math.max(0., Math.min(100., (parseFloat(sw.value) || 0) * 10));
cs[0][2].f = Math.max(0., Math.min(100., (parseFloat(pl.value) || 0) * 10));
cs[0][3].f = Math.max(0., Math.min(100., (parseFloat(pw.value) || 0) * 10));
});
frame();
});
</script>]]></description>
<link>https://hardmath123.github.io/pelu.html</link>
<guid isPermaLink="true">https://hardmath123.github.io/pelu.html</guid>
<dc:creator><![CDATA[Hardmath123]]></dc:creator>
<pubDate>Fri, 03 Dec 2021 18:30:00 GMT</pubDate>
</item>
<item>
<title><![CDATA[Birds and the Representation of Representation]]></title>
<description><![CDATA[<p>What is it about birds?</p>
<h2 id="toni-morrison-nobel-lecture">Toni Morrison, Nobel Lecture</h2>
<blockquote>
<p>Speculation on what (other than its own frail body) that bird-in-the-hand might signify has always been attractive to me, but especially so now thinking, as I have been, about the work I do that has brought me to this company. <a href="https://www.nobelprize.org/prizes/literature/1993/morrison/lecture/">(full)</a></p>
</blockquote>
<h2 id="richard-siken-the-language-of-the-birds-">Richard Siken, “The Language of the Birds”</h2>
<blockquote>
<p>And just because you want to paint a bird, do actually paint a bird, it doesn’t mean you’ve accomplished anything. <a href="https://poets.org/poem/language-birds">(full)</a></p>
</blockquote>
<h2 id="cross-examination-in-_brancusi-v-united-states_">Cross-examination in <em>Brancusi v. United States</em></h2>
<blockquote>
<p><strong>Waite:</strong> What do you call this?<br><strong>Steichen:</strong> I use the same term the sculptor did, oiseau, a bird.<br><strong>Waite:</strong> What makes you call it a bird, does it look like a bird to you?<br><strong>Steichen:</strong> It does not look like a bird but I feel that it is a bird, it is
characterized by the artist as a bird.<br><strong>Waite:</strong> Simply because he called it a bird does that make it a bird to you?<br><strong>Steichen:</strong> Yes, your honor.<br><strong>Waite:</strong> If you would see it on the street you never would think of
calling it a bird, would you?<br>[<strong>Steichen:</strong> Silence]<br><strong>Young:</strong> If you saw it in the forest you would not take a shot at it?<br><strong>Steichen:</strong> No, your honor. <a href="https://www.legalaffairs.org/issues/September-October-2002/story_giry_sepoct2002.msp">(more)</a></p>
</blockquote>
<h2 id="donna-tartt-_the-goldfinch_">Donna Tartt, <em>The Goldfinch</em></h2>
<blockquote>
<p>But who knows what Fabritius intended? There’s not enough of his work left to even make a guess. The bird looks out at us. It’s not idealized or humanized. It’s very much a bird.</p>
</blockquote>
<h2 id="adam-savage-my-obsession-with-objects-and-the-stories-they-tell-">Adam Savage, “My obsession with objects and the stories they tell”</h2>
<blockquote>
<p>And then there is this fourth level, which is a whole new object in the world: the prop made for the movie, the representative of the thing, becomes, in its own right, a whole other thing, a whole new object of desire. . . . There are several people who own originals, and I have been attempting to contact them and reach them, hoping that they will let me spend a few minutes in the presence of one of the real birds, maybe to take a picture, or even to pull out the hand-held laser scanner that I happen to own that fits inside a cereal box, and could maybe, without even touching their bird, I swear, get a perfect 3D scan. And I’m even willing to sign pages saying that I’ll never let anyone else have it, except for me in my office, I promise. I’ll give them one if they want it. And then, maybe, then I’ll achieve the end of this exercise. But really, if we’re all going to be honest with ourselves, I have to admit that achieving the end of the exercise was never the point of the exercise to begin with, was it? <a href="https://www.youtube.com/watch?v=29SopXQfc_s">(full)</a></p>
</blockquote>
<h2 id="michael-shewmaker-the-curlew-">Michael Shewmaker, “The Curlew”</h2>
<blockquote>
<blockquote>
<p>Plate 357 <em>(Numenius Borealis)</em> is the only instance in which the subject appears dead in the work of John James Audubon.</p>
</blockquote>
<p>He waits alone, sketching angels from the shade—<em>a kind of heavenly bird,</em> he reasons with himself—although their wings are broke, faces scarred, each fragile mouth feigning the same sad smile as the one before it. Offered triple his price to paint a likeness of the pastor’s daughter—buried for more than a week—he reluctantly agreed—times being what they are.<br>. . . . .<br>And yet he studies it—from behind the dunes—studies its several postures, grounded and in sudden flight—and not content to praise it from a distance, to sacrifice detail, unpacks his brushes and arranges them before raising his rifle and taking aim.</p>
</blockquote>
<hr>
<h2 id="richard-hunt-s-quest-for-notation-for-birdsong">Richard Hunt’s quest for notation for birdsong</h2>
<p>(Added April 18, 2023)</p>
<blockquote>
<p>Zizzy, uncanny, pebble-tapping, lusty, pule. Ventriloquial, tantara,
feminine, crepitate. Tintinnabulation, sough, devil’s tattoo. Sparrowy.</p>
</blockquote>
<p>(<a href="https://daily.jstor.org/what-it-sounds-like-when-doves-cry/">See the article for
more…</a>)</p>
<h2 id="children-imitating-cormorants">Children imitating cormorants</h2>
<p>(Added June 2023)</p>
<p>By <a href="https://hellopoetry.com/poem/15275/children-imitating-cormorants/">Kobayashi
Issa</a>;
mentioned by a professor on a walk by the Charles River.</p>
<blockquote>
<p>Children imitating cormorants<br>are even more wonderful<br>than cormorants.</p>
</blockquote>
<h2 id="fake-birds-in-disneyland">Fake birds in Disneyland</h2>
<p>(Added June 2023)</p>
<p>From Philip K. Dick’s speech, <a href="https://urbigenous.net/library/how_to_build.html">“How to build a universe.”</a></p>
<blockquote>
<p>In my writing I got so interested in fakes that I finally came up with the
concept of fake fakes. For example, in Disneyland there are fake birds worked
by electric motors which emit caws and shrieks as you pass by them. Suppose
some night all of us sneaked into the park with real birds and substituted
them for the artificial ones. Imagine the horror the Disneyland officials
would feel when they discovered the cruel hoax. Real birds! And perhaps
someday even real hippos and lions. Consternation.</p>
</blockquote>
<h2 id="a-representation-of-an-idea">A representation of an idea</h2>
<p>(Added June 2023)</p>
<p>From Eric Kraft’s <em>Where do we stop?</em></p>
<blockquote>
<p>The idea that seemed so bright when it was leaping and darting and fluttering through my mind looked dull and dead when I’d caught it and pinned it to my paper … It wasn’t an idea now, but the representation of an idea. It didn’t fly, didn’t flutter by, didn’t catch the eye as I thought it would. (154)</p>
</blockquote>
<h2 id="symbolic-of-my-entire-existence">Symbolic of my entire existence</h2>
<p>(Added June 2023)</p>
<p>From CAKE’s Mr. Mastodon Farm</p>
<blockquote>
<p>Now due to a construct in my mind<br>That makes their falling and their flight<br>Symbolic of my entire existence<br>It becomes important for me<br>To get up and see<br>Their last-second curves toward flight</p>
<p>It’s almost as if my life would fall<br>Unless I see their ascent.</p>
</blockquote>
<p>See also: the poem “<a href="https://poets.org/poem/because-you-asked-about-line-between-prose-and-poetry">Because You Asked about the Line Between Prose and
Poetry</a>“
by Howard Nemerov.</p>
<blockquote>
<p>Sparrows were feeding in a freezing drizzle<br>That while you watched turned to pieces of snow<br>Riding a gradient invisible<br>From silver aslant to random, white, and slow.</p>
<p>There came a moment that you couldn’t tell.<br>And then they clearly flew instead of fell.</p>
</blockquote>
<h2 id="why-illustrate-">Why illustrate?</h2>
<p>From an
<a href="https://www.nytimes.com/2023/07/06/science/birding-illustration-sibley.html">interview</a>
with illustrator David Sibley.</p>
<blockquote>
<p>An illustration provides so much more than a photograph. In an illustration,
I can create a typical bird, an average bird of a species in the exact pose
that I want, and create an image of a similar species in exactly the same
pose so that all the differences are apparent… Your drawing becomes a
record of your understanding of that bird in that moment.</p>
</blockquote>
<h2 id="unseeable-birds">Unseeable birds</h2>
<p>(See Edgerton’s photograph of Mrs. May Rogers Webster with her hummingbirds.)</p>
<h2 id="my-crow">My Crow</h2>
<p>A poem by Raymond Carver</p>
<blockquote>
<p>A crow flew into the tree outside my window.<br>It was not Ted Hughes’s crow, or Galway’s crow.<br>Or Frost’s, Pasternak’s, or Lorca’s crow.<br>Or one of Homer’s crows, stuffed with gore,<br>after the battle. This was just a crow.<br>That never fit in anywhere in its life,<br>or did anything worth mentioning.<br>It sat there on the branch for a few minutes.<br>Then picked up and flew beautifully<br>out of my life. </p>
</blockquote>
<h2 id="birdman">Birdman</h2>
<p>“A thing is a thing, not what is said of that thing.” (on Riggan’s dressing
room mirror)</p>
<h2 id="the-bird-watcher-s-christmas-dinner">The Bird Watcher’s Christmas Dinner</h2>
<p>Poem by Scott Bates, via <a href="https://betterlivingthroughbeowulf.com/birds-as-heavenly-messengers/">Better
Living</a></p>
<blockquote>
<p>They wait their turns with impatience<br>Perched on the cedar by the fence<br>Like so many Christmas ornaments,</p>
<p>Cardinal, goldfinch and chickadee,<br>Turning it, trismegistically,<br>Into an ancient Christmas tree</p>
</blockquote>
<h2 id="sentences-toward-birds">Sentences Toward Birds</h2>
<p>Al Filreis
<a href="https://readalittlepoetry.com/2025/08/13/sentences-toward-birds-by-robert-grenier/">says</a>
of Robert Grenier’s collection:</p>
<blockquote>
<p>One of the bird-card-poems reads this way:</p>
<blockquote>
<p>sing songs to crop duster</p>
</blockquote>
<p>This is the one I can’t get out of my mind. The birds are singing. Or perhaps
it’s more accurate to say that they are being asked to sing by the poet. Not
to us or for us—which would be the traditional lyric gesture. No, the song
(the poem) is an ode to something else flying overhead. It’s mundane,
agricultural, mechanical, noisy, unnatural, unlyric. Yet it is just as
eligible—as a recipient of beautiful tonal inflections—as any word or phase
that follows the “to” in the line. This, like many of the cards in “Sentences
Toward Birds,” is a meta-poem: a poem about the poem as a song sung to
anything.</p>
</blockquote>
<h2 id="ars-poetica-of-partridges-palestine">ars poetica of partridges & palestine</h2>
<blockquote>
<p><strong>ars poetica of partridges & palestine</strong><br><em>Mandy Shunnarah</em></p>
<p>Sedo told me once our last name means partridge—<br>that sweet little bird in the pear tree every Christmas.<br>I’m looking for metaphors on Wikipedia again. It’s easy<br>to write poems about birds with so many species of partridges.<br>National Geographic says 43 of those species are decreasing<br>in population; something Palestinians know all too well.<br>People like poems about birds more than they like poems<br>about Palestine, & actual Palestine & her endangered people.<br>We just won’t go extinct quickly enough. But I digress.<br>That’s not the metaphor I’m looking for just yet.<br>An avian ecologist said partridges<br>are ground-dwellers, unlikely to roost in pear trees.<br>Those first day of Christmas birds of the family Phasianidae,<br>presents from one’s true love, were put in pear trees<br>against their will—branches like an open-air prison<br>the world ignores because at least they can still see sky.<br>Some might say I’m reaching, but that’s what metaphors do.<br>But inside them, there’s always a feather of truth.<br>This I know: when it comes to partridges & Palestine,<br>the pervasive popular messaging around us both is false.<br>The difference is, everyone knows senselessly killing birds is wrong. </p>
</blockquote>
]]></description>
<link>https://hardmath123.github.io/birds-and-the-representation-of-representation.html</link>
<guid isPermaLink="true">https://hardmath123.github.io/birds-and-the-representation-of-representation.html</guid>
<dc:creator><![CDATA[Hardmath123]]></dc:creator>
<pubDate>Wed, 20 Oct 2021 18:30:00 GMT</pubDate>
</item>
</channel>
</rss>