EO. Spaghetti Loops

Just why does e appear in so many guises?


This week we pose two interesting e-related puzzles:

1) Obtain a large bowl with N strands of spaghetti; grab two loose ends at random and tie them together. Repeat until all the loose ends are paired. You will now have a bowl full of loops of spaghetti. What is the expected number of loops?

2) N people walk into a room; each of their (unique) names has been written on a nametag and placed into a bowl. If each person picks a nametag at random, what is the probability that no one gets the right name?

In both cases, the interesting thing is what happens as N increases without bound.
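
Both processes are easy to play with numerically before peeking at any answers. Here is a rough Monte Carlo sketch in Python (our own code; the function names are invented for this post) that lets you watch both quantities as N grows:

    import random

    def spaghetti_loops(n):
        # Label the 2n loose ends 0..2n-1; strand k owns ends 2k and 2k+1,
        # so e ^ 1 is the other end of e's strand. Tying random pairs of
        # ends is just a uniform random perfect matching on the ends.
        ends = list(range(2 * n))
        random.shuffle(ends)
        tied_to = {}
        for i in range(0, 2 * n, 2):
            a, b = ends[i], ends[i + 1]
            tied_to[a], tied_to[b] = b, a
        # Walk strand, knot, strand, knot, ... counting the closed loops.
        seen, loops = set(), 0
        for start in range(2 * n):
            if start in seen:
                continue
            loops, e = loops + 1, start
            while e not in seen:
                seen.add(e)
                seen.add(e ^ 1)       # cross the strand
                e = tied_to[e ^ 1]    # cross the knot
        return loops

    def nobody_matches(n):
        # One trial of the nametag puzzle: True if no one draws their own tag.
        tags = list(range(n))
        random.shuffle(tags)
        return all(tag != person for person, tag in enumerate(tags))

    for n in (10, 100, 1000):
        loops = sum(spaghetti_loops(n) for _ in range(2000)) / 2000
        wrong = sum(nobody_matches(n) for _ in range(20000)) / 20000
        print(n, round(loops, 2), round(wrong, 4))

The nametag fraction settles near 0.3679 almost immediately; the loop count creeps upward far more slowly, a point taken up in the comments below.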

When we were done taping, I remarked to Kyle that, well, surely that’s the end of e-related stuff for a while. But I just remembered one of the best e-related puzzles of all. We’ll add it here as a bonus:

3) Someone has written counting numbers, one on each of N cards. You don’t have any idea what the largest number is. The cards are shuffled and arranged in a line face down.

You turn the cards over one by one, discarding as you go; you may stop at any time. Your goal is to stop on the card with the largest number. (You can’t go back and retrieve a discarded card, and you can’t continue once you stop.)

Your strategy, then, is to flip over some number M of cards just to see what the field is like, then take the first card better than any of the cards in your test sample.

You don’t want M to be too small; you need to get a feel for how big the numbers might be. But you don’t want M to be too big either; you don’t want the biggest number to be wasted in your test sample.

Amazingly, with the optimal M this strategy succeeds 1/e (almost 37%!) of the time. What is this M, and why does this work?
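
For anyone who wants to experiment, here is a small Python sketch (again our own; play is just an invented helper) that runs the strategy for N = 100 and a few sample sizes M:

    import random

    def play(n, m):
        # One game: look at the first m cards, then take the first card
        # that beats the best of that sample. We win if it is the maximum.
        cards = list(range(1, n + 1))
        random.shuffle(cards)
        threshold = max(cards[:m])
        for card in cards[m:]:
            if card > threshold:
                return card == n
        return False  # we never stopped, so we lose

    n, trials = 100, 20_000
    for m in (5, 20, 37, 60, 90):
        rate = sum(play(n, m) for _ in range(trials)) / trials
        print(m, rate)   # the win rate peaks near m = n/e ~ 37, at about 0.37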

7 Comments

  1. stevestyle said,

    October 8, 2008 at 3:10 pm

    I’m reposting because some text seemed to go missing in the middle. Not sure if that was me or thee.

    I agree with empire. Good questions but they are getting harder! I’m hoping Chaim will post the answers at some point.

    I’ve seen the spaghetti one before so I’ll leave that to others, have fun in the kitchen!

    2) I have a partial solution.

    I’m quite interested in the parties Chaim goes to where the guests put their nametags in a bowl. Are math conferences more fun than I’ve imagined?

    Like the creators of the infinite improbability drive, I don’t get invited to those sorts of parties; I just calculate their probabilities and complain about the loud music.

    Suppose there are N guests. Let the chance of them all getting the wrong nametag be p(N).

    The first guest, A, has an (N-1) in N chance of picking the wrong nametag. Let’s say they pick the nametag for B. Now consider B. They can’t pick their own nametag as it has already been picked. So suppose they pick C’s nametag. Now consider C, again we know their nametag is taken by someone else, B. We can continue like this until we find the person who picks the nametag for A.

    Some examples with probabilities:

    A picks A: probability 1/N, but we fail in our goal of everyone having the wrong nametag.

    A picks B, B picks A: probability (N-1)/N x 1/(N-1) = 1/N

    A picks B, B picks C, C picks A: probability (N-1)/N x (N-2)/(N-1) x 1/(N-2) = 1/N

    A picks B, B picks C, C picks D, D picks A: probability (N-1)/N x (N-2)/(N-1) x (N-3)/(N-2) x 1/(N-3) = 1/N

    and so on. Note that each loop contains the same nametags as guests and so the remaining nametags match the remaining guests.

    The loop’s length is between 1 and N, with each possible length having probability 1/N. Of course only loops of length 2 or more will work for us, and this will leave between 0 and N-2 guests unaccounted for.

    Suppose our loop is N-r in length, so there are r guests unaccounted for. Since the remaining nametags match the remaining guests we are back at the starting position, but now with r nametags and guests, so the chance of success is now p(r).

    By summing over the possible lengths we have the following formula:

    p(N) = SUM( 0 <= r <= N-2 )( 1/N x p(r) )

    and so

    Np(N) = SUM( 0 <= r <= N-2 )( p(r) )

    Comparing this with the same identity for N+1 shows that, for N >= 0,

    (N+2)p(N+2) = (N+1)p(N+1) + p(N)

    Unfortunately I have no idea how to solve this, but it looks to me as though it should involve e (is that convincing?)

    I did try substituting q(N) = Np(N). You need to start with N=1 to avoid issues with zeroes. This gives:

    q(1) = 0 (a lone guest must take their own tag); q(2) = 1; for N >= 1, q(N+2) = q(N+1) + q(N)/N

    However this didn’t seem to help at all, so I valiantly retreated. That’s to say I gave up.
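
    For what it’s worth, the recurrence gives in readily to brute force. A quick Python sketch (ours) iterates it from p(1) = 0 and p(2) = 1/2 and watches p(N) settle onto 1/e ~ 0.3679, the limit of the classic derangement series p(N) = SUM( 0 <= k <= N )( (-1)^k / k! ):

        import math

        # Iterate (N+2) p(N+2) = (N+1) p(N+1) + p(N),
        # starting from p(1) = 0 and p(2) = 1/2.
        p = {1: 0.0, 2: 0.5}
        for n in range(1, 29):
            p[n + 2] = ((n + 1) * p[n + 1] + p[n]) / (n + 2)
        for n in (3, 5, 10, 20, 30):
            print(n, round(p[n], 6))
        print("1/e:", round(1 / math.e, 6))   # p(N) converges here, fast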

    3) Again I don’t have a nice solution, but I do have a formula of sorts.

    Let’s assume that the numbers written on the N cards are 1 to N.

    If we win, then the largest number in the first M cards is between M and N-1. Say it is r. The probability of us winning with the highest sample card being r is the product of the following probabilities:
    r is in the first M cards: probability M/N;
    all N-r higher cards land beyond the first M cards (given that r is among them): (N-M)/(N-1) x (N-M-1)/(N-2) x … x (r-M+1)/r;
    N itself is the leftmost of those higher cards: probability 1/(N-r).

    This product simplifies to M/(r(N-r)) x (N-M)!/(r-M)! x r!/N!

    So all that’s left is to sum this for r between M and N-1, then calculate the optimum M for any N, then take the limit as N increases.

    It looks to me as though the answer has something to do with e.
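
    That sum can be carried out exactly with fractions. A short Python sketch (win_probability is our invented name) does it for N = 100, and both the optimum M and the winning probability land where the e-hunters would hope:

        from fractions import Fraction
        from math import e, factorial

        def win_probability(n, m):
            # Sum the product above over r, the largest card in the sample.
            return sum(Fraction(m, r * (n - r))
                       * Fraction(factorial(n - m), factorial(r - m))
                       * Fraction(factorial(r), factorial(n))
                       for r in range(m, n))

        n = 100
        best_m = max(range(1, n), key=lambda m: win_probability(n, m))
        print(best_m, round(n / e, 1))             # 37, with N/e ~ 36.8
        print(float(win_probability(n, best_m)))   # ~0.3710, heading to 1/e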

  2. jlundell said,

    October 8, 2008 at 5:13 pm

    Speaking of quincunxes, Galton so named his device because the arrangement of pins resembles a repetition of the quincunx pattern on a die.

    And speaking of words, “plinko” owes its name in part to pachinko, a Japanese game that functions as a cross between a pinball machine and a slot machine, but whose mechanism is Galton-quincunx-like.

    Finally, it’s interesting that the Magic Putnam problem is a kind of inverse of the normal curve, and thus gives us a somewhat intuitive idea of how that curve takes shape.

  3. cthemann said,

    October 12, 2008 at 11:10 pm

    First time here. I mostly just wanted to see if anyone was shedding any light on the problems.

    I got none of them, but had some thoughts that might help:
    1. I have made several false starts on this one, but have little to add. From a little experimentation (with numbers, not noodles), I think the number is relatively small compared to N and grows much more slowly than N. I think it may even converge on a relatively small number (maybe 2 or phi).

    2. This problem turned out much harder than I thought. I tried starting with the statistical odds of all getting the right one (1/N!), but couldn’t work it the other way, because at all steps after the first I am unsure of the number of loops involved.

    As Steve said, the first person has an (N-1)/N chance of getting the wrong one, but the second may have an (N-1)/(N-1) (100%) chance or may have (N-2)/(N-1). If you multiply the first times the second for each case, you get ((N-2)/N) x ((N-2)/(N-1)), which is the chance that A did not draw A or B (and B then missed as well), plus (1/N) x ((N-1)/(N-1)), which is the chance that A drew B. So the odds of the first 2 being wrong should be ((N-2)^2 + (N-1))/(N x (N-1)); a quick brute-force check of this appears at the end of this comment. Doing it this way, the formula gets really long.

    I think knowing the loops solves this, though. Essentially, if we have an expected value, we can get a probability of one loop, which is no one connecting with their own nametag.

    3. On this one I am not certain whether we are relating M to the number of cards or to the numbers we draw. If the number of cards is unknown, but the counting numbers are consecutive, I think M should relate to the value of the first card drawn, where a sample M cards large is taken and the next card to exceed the largest in the sample is the stopping point. I also had the thought that you could go to the first card that exceeds your current largest value by exactly one, but again, this assumes that consecutive numbers have been used.
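
    The two-guest formula above is easy to confirm by brute force over all N! drawings; a small Python sketch (ours):

        from itertools import permutations

        # Probability that the first two guests both draw wrong nametags,
        # by enumeration, versus ((N-2)^2 + (N-1)) / (N (N-1)).
        for n in range(2, 8):
            perms = list(permutations(range(n)))
            hits = sum(p[0] != 0 and p[1] != 1 for p in perms)
            formula = ((n - 2) ** 2 + (n - 1)) / (n * (n - 1))
            print(n, hits / len(perms), round(formula, 6))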

  4. Paul J. Nahin said,

    March 13, 2011 at 6:28 am

    I first saw the Spaghetti Loop problem in Steve Strogatz’s book THE CALCULUS OF FRIENDSHIP (Princeton 2009), but now see that the problem pre-dates that book’s appearance. After reading Strogatz’s book, I mentioned the problem in a talk I gave at a math conference in Monterey, CA, in December 2009. Matt Davis of Chabot College, CA, was in the audience, and a couple of days later he sent me a beautiful solution: the expected number of loops is E(n) = 1 + (1/3) + (1/5) + … + (1/(2n-1)). What is really surprising about this is how slowly the expected number grows with n. For example, E(10,000) = 5.59.
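
    Davis’s sum is a one-liner to check; a short Python sketch (ours) also shows the logarithmic growth explicitly (the constant ln 2 + gamma/2 is the standard asymptotic):

        import math

        def expected_loops(n):
            # E(n) = 1 + 1/3 + 1/5 + ... + 1/(2n - 1)
            return sum(1 / (2 * k - 1) for k in range(1, n + 1))

        print(round(expected_loops(10_000), 2))   # 5.59, as stated
        # E(n) grows like (1/2) ln n + ln 2 + gamma/2: unbounded, but slowly.
        gamma = 0.5772156649
        print(round(0.5 * math.log(10_000) + math.log(2) + gamma / 2, 2))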

  5. Shawn said,

    February 5, 2012 at 2:23 am

    Wolfram Alpha seems to indicate that the definite integral from -infinity to infinity of e^(-x^2) is just the square root of pi: integral_(-infinity)^infinity e^(-x^2) dx = sqrt(pi) ~~ 1.77245.

  6. strauss said,

    February 5, 2012 at 1:55 pm

    True, and important (nice proof, too!)
    But interestingly, e^(-x^2) has no elementary antiderivative; that is, no antiderivative that can be assembled from the sorts of functions one usually sees: polynomials, trig stuff, e’s, logs, etc. The integral is useful enough, though, that “erf” was invented for just this purpose.
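
    Both facts are easy to poke at numerically, since Python’s standard library ships erf. A small sketch (ours):

        import math

        # Total area under e^(-x^2) is sqrt(pi). By the definition of erf,
        # erf(x) = (2/sqrt(pi)) * integral_0^x e^(-t^2) dt, the half-line
        # area is sqrt(pi)/2 * erf(inf); erf(50) is 1.0 to machine precision.
        print(math.sqrt(math.pi))                      # ~1.77245
        print(math.sqrt(math.pi) / 2 * math.erf(50))   # half-line area ~0.88623

        # Crude trapezoid check of the half-line area on [0, 10]:
        h = 10 / 100_000
        area = sum(math.exp(-(i * h) ** 2) for i in range(100_001)) * h - h / 2
        print(area)                                    # ~0.88623 again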

  7. Shawn said,

    February 5, 2012 at 5:10 pm

    So the area is, in fact, sqrt(pi)/2?
