Carl Sagan, Immortalized in SpokePOV

OK, maybe “immortalized” isn’t exactly the right word for a portrait in LEDs that only appears when in motion, but it’s close enough.

SpokePOV is a kit you can build that lets you program images to appear on your bicycle’s wheel when you ride at a certain speed. The “POV” stands for “persistence of vision,” which in a colloquial sense refers to the optical illusion in which still images can appear to be moving, as when you watch a film, and a moving object can display a static image, as in SpokePOV. SpokePOV uses lines of LEDs on a circuit board with a small computer attached. The board is strapped to the spokes of your bike wheel, and a magnet is attached to the bike’s frame so that a sensor on the board can tell how fast the wheel is turning and when to flash each LED. When the wheel is spinning fast enough, people watching will see an LED image on your bike wheel as you fly past.
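
To make that concrete, here's a rough sketch in Python of the timing loop a POV display like this runs. It is not the actual SpokePOV firmware, which lives on the board's microcontroller; read_hall_sensor, set_leds, and the column count are hypothetical stand-ins:

    import time

    NUM_COLUMNS = 256  # angular slices per revolution (made-up resolution)

    def read_hall_sensor():
        """Hypothetical stand-in: True at the instant the magnet passes."""
        raise NotImplementedError

    def set_leds(column_bits):
        """Hypothetical stand-in: light the row of LEDs per a bitmask."""
        raise NotImplementedError

    def run_pov(image):
        """image: a list of NUM_COLUMNS bitmasks, one per angular slice."""
        last_pulse = time.monotonic()
        period = 0.5                       # seconds per revolution, re-measured live
        while True:
            if read_hall_sensor():         # magnet passed: one full revolution done
                now = time.monotonic()
                period = now - last_pulse  # update the wheel-speed estimate
                last_pulse = now
            # Work out which angular slice the board is in right now,
            # and show that column of the image.
            angle = ((time.monotonic() - last_pulse) / period) % 1.0
            set_leds(image[int(angle * NUM_COLUMNS) % NUM_COLUMNS])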

An interesting side note: the phrase “persistence of vision” comes from the idea that an image remains on your retina for a brief period of time, even after the image itself has changed. An early hypothesis suggested that this is why we perceive films as fluid motion, but that hypothesis has been disproved. The researchers behind that work would like to replace the phrase with “short-range apparent motion,” but I’m unsure where that leaves illusions like SpokePOV or novelty POV clocks, since they aren’t creating apparent motion but apparent static images. Beta movement doesn’t seem to fit, either, nor does the phi phenomenon. Any psychologists in the audience, please feel free to weigh in here.

Anyway, SpokePOV is a fun project and fairly easy to put together, though I’m lucky to have a partner who honestly loves to solder. We got three circuit board kits, since the more boards on a wheel, the more solid the image appears at slower speeds. We got blue, which requires a bit more power than red, yellow, or green, but is so super awesome and bright. Plus it matches my handlebar tape and shoes.

We used a serial-to-USB converter to upload images to the boards’ onboard computers; each board stores four different images, enough to create animations or just to display a rotating set of stills.

For my first go, I tried a portrait of Carl Sagan. Because we only have one color, I had to find a simple portrait that would reduce cleanly to a single color. I found this one on Google Images and turned it into this:

[Image: Carl Sagan portrait]
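
For the curious: reducing a square bitmap to wheel data boils down to thresholding the image to one bit and then sampling it along radial lines, one LED column per wheel angle. The Python/Pillow sketch below approximates that step; the real .dat layout is defined by Adafruit's SpokePOV software and isn't reproduced here, and the resolution constants are made up:

    import math
    from PIL import Image

    NUM_COLUMNS = 256  # angular resolution (made up)
    NUM_LEDS = 30      # LEDs along one board (made up)

    def image_to_columns(path):
        """Sample a square image into one on/off bitmask per wheel angle."""
        img = Image.open(path).convert("1")  # hard one-bit threshold
        w, h = img.size
        cx, cy, r = w / 2, h / 2, min(w, h) / 2
        columns = []
        for c in range(NUM_COLUMNS):
            theta = 2 * math.pi * c / NUM_COLUMNS
            bits = 0
            for led in range(NUM_LEDS):
                rho = r * (led + 1) / NUM_LEDS  # step outward along the spoke
                x = int(cx + rho * math.cos(theta))
                y = int(cy + rho * math.sin(theta))
                if 0 <= x < w and 0 <= y < h and img.getpixel((x, y)) == 0:
                    bits |= 1 << led            # dark pixel -> LED on
            columns.append(bits)
        return columns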

And because I could animate it, I decided to make Carl give a sassy little wink:

[Image: Carl Sagan winking]

I uploaded the images and spun the wheel, and here’s what it looked like:

[Image: Carl Sagan on the bike wheel]

Okay, probably not a likeness that is immediately recognizable to most people on the street (or, maybe, to anyone), but I’m still pleased with it, and regardless of what image is showing, the LEDs are so bright that drivers should be able to spot me from a mile away on a dark and otherwise dangerous night.

I’ve already designed a new image of an old school atom that looks pretty amazing, though I don’t have a good photo of it on the bike just yet – I’ll probably tweet it when I do. I’d love to get a pic of the bike in motion, but my photographer partner tells me that’s going to be very, very difficult, so in the meantime we have to go with the upside-down stationary bike. Also, in the future I may pick up some more kits to light up the front wheel as well, at which point I’d like to experiment with images that “jump” from one wheel to the other.

If you have SpokePOV or want to try it and would like a winking Sagan face on your bike, here are the .dat files, which you can edit as you please! Also, let me know what other dorky images you think would work on a bike wheel and maybe I’ll try them out.

Rebecca Watson

Rebecca is a writer, speaker, YouTube personality, and unrepentant science nerd. In addition to founding and continuing to run Skepchick, she hosts Quiz-o-Tron, a monthly science-themed quiz show and podcast that pits comedians against nerds. There is an asteroid named in her honor. Twitter @rebeccawatson Mastodon mstdn.social/@rebeccawatson Instagram @actuallyrebeccawatson TikTok @actuallyrebeccawatson YouTube @rebeccawatson BlueSky @rebeccawatson.bsky.social

Comments

  1. I just saw a YouTube video of a very similar product from a company called MonkeyLectric. Pretty cool, Rebecca! This looks like a good post for MAL, too! It would be interesting to see what other images people come up with, like the FSM or a teapot.

  2. Nice.
    The next move is to add the green and red LED bars so you can get full colour images.

  3. Super dorky cool. I just got my first toys from Adafruit myself (a humidity/temperature sensor and an electronic water valve) for a less show-off-able project (a robotic garden waterer — less water usage! More data!).

    The neuroscience behind the “POV” effect is pretty cool — and one of those things that always ends up turning out to be somewhat more complicated than the basic theory would suggest. But the basic theory (as I understand it) is something like this. Photoreceptors on your retina respond to the arrival of light by producing electrical impulses. Neurons that sit on top of the retina, called retinal ganglion cells, are electrically coupled to the photoreceptor cells below via intermediate cells. Some of these couplings are excitatory, meaning that when the photoreceptor is active, the RGC neuron is more likely to produce its own electrical impulses, and some are inhibitory, meaning that when the photoreceptor is active, the RGC is less likely to produce electrical impulses.

    Individual RGCs have both types of connections to different photoreceptors, giving some of them a really interesting property called “center-surround receptive fields”. Basically, some RGCs are wired to a donut-shaped piece of the retina’s photoreceptors (the “surround”) with inhibitory connections, and the donut-hole piece of the retina with excitatory connections (the “center”). This is called an “on-center” receptive field; some RGCs have the opposite “off-center” configuration. A better explanation with graphics is on Wikipedia: https://en.wikipedia.org/wiki/Receptive_field#Retinal_ganglion_cells .

    This has the really cool effect that for an RGC with a receptive field that’s fully in the dark, the center’s and the surround’s effects are balanced, and the cell is basically unaffected. But for an RGC with a receptive field that’s fully in the light, the center’s and surround’s effects are again balanced, and the cell is again not much affected! Only when the receptive field is partially in the light and partially in the dark, so that the center’s and surround’s effects are unequal, does the RGC change its activity pattern. This means that RGCs respond mainly to edges. Since RGCs are the main output from the retina to our brains, this means that in some sense, our brains only “see” edges — the nice white background on Skepchick produces very little effect on our brains. (Letting us focus more on content, I guess?) There’s a toy numerical version of this balancing act in the first sketch after the comments.

    I promise this is getting back to the POV phenomenon. So the question is, with this edge-detection system, how do we tell if something is moving or not moving? It turns out that evolution has found a way to take advantage of the fact that the electrical impulses between neurons take some time to travel. By making the connection physically longer or shorter, or by changing the number of neurons involved, the electrical impulse will take more or less time to arrive. An intentionally long connection is called a “delay line”. So imagine two RGCs, which have receptive fields that are very close to each other on the retina; call them Alice and Bob. Both Alice and Bob send electrical impulses to a third neuron, Carol, but Alice’s connection includes a delay line, so her impulse takes more time to arrive. Now suppose there is an edge of light, moving across the retina. First it reaches Alice’s receptive field, and she begins to produce electrical impulses, which begin moving through the delay line to Carol. Then it moves out of Alice’s receptive field and into Bob’s, and he begins to produce electrical impulses, which reach Carol immediately. If the edge of light is moving at just the right speed, Carol will receive Alice and Bob’s electrical impulses at exactly the same time. Note that Alice and Bob can’t tell the difference between whether the edge of light moved through their receptive fields, or whether it simply turned on and then off again. But Carol knows that because Alice and Bob both sent impulses which arrived at the same time, there must be an edge of light moving across the retina. This is the basis for our ability to detect motion. (The actual circuit is slightly more complicated than that, see this Wikipedia section: https://en.wikipedia.org/wiki/Motion_sensing_in_vision#The_Reichardt-Hassenstein_Model .) The second sketch after the comments plays out the Alice/Bob/Carol timing in a few lines of code.

    So Rebecca’s awesome new bike wheel is fooling our visual systems in two ways. First, the wheel is spinning very fast, and our brains just don’t have delay lines that can discriminate motion that fast. So when one of the LEDs is on and sweeping out a curve on our retinas, we don’t register edges or motion, just a curve of light. But, when the LEDs switch on or off in exactly the same angular position every time, they’re creating effective “edges” on our retina, which always occur in exactly the same position, so there’s no apparent motion (“Bob” never activates, so “Carol” believes everything is static). This kind of explanation isn’t really sufficient to explain lots of other visual illusions of motion (like the beta movement and phi phenomenon that Rebecca mentioned), because motion processing is even more complicated than this, and there’s still a lot that’s not known about it after decades of research.

    Sorry for the wall of text. Science is cool!

  4. Pretty cool. Since there are 4 different colors available, I wonder if you could put on three different color systems at the same time. That is, where you would normally put down one blue strip, instead put a red, blue, and green strip next to each other, each wired up to its own set of controllers for R, G, and B. Then, using a color picture, you can do color separations in Photoshop to extract red, green, and blue image masks, and run all three pictures together at the same time. Presto: you should have low-fi color images. The only sticky part is that all the controller sets need to be synchronized to the same clock so they all change at the exact same time. But that would be uber geek worthy :) (There’s a quick channel-split sketch after the comments.)
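
The center-surround balance described in comment 3 is easy to check numerically. Below is a toy difference-of-Gaussians model of an on-center receptive field, a standard textbook simplification rather than anything from the comment's sources; the kernel widths and patch values are arbitrary illustration numbers:

    import numpy as np

    def on_center_response(patch, sigma_c=1.0, sigma_s=3.0):
        """Difference-of-Gaussians model of an on-center receptive field."""
        n = patch.shape[0]
        y, x = np.mgrid[:n, :n] - (n - 1) / 2
        center = np.exp(-(x**2 + y**2) / (2 * sigma_c**2))
        surround = np.exp(-(x**2 + y**2) / (2 * sigma_s**2))
        center /= center.sum()      # normalize so that a uniform patch...
        surround /= surround.sum()  # ...excites and inhibits equally
        return float(np.sum(patch * (center - surround)))

    n = 15
    dark = np.zeros((n, n))   # receptive field fully in the dark
    light = np.ones((n, n))   # receptive field fully in the light
    edge = np.concatenate([np.zeros((n, n // 2)),
                           np.ones((n, n - n // 2))], axis=1)  # half lit

    print(on_center_response(dark))   # ~0: excitation and inhibition cancel
    print(on_center_response(light))  # ~0: they cancel again
    print(on_center_response(edge))   # nonzero: only the edge drives the cell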
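
The Alice/Bob/Carol story in the same comment is, as the commenter notes, a simplified Reichardt-Hassenstein correlator. Here is a minimal sketch of that correlation with made-up spike trains and an arbitrary five-step delay; real circuits are continuous and considerably messier:

    import numpy as np

    def correlate(alice, bob, delay):
        """Delay Alice's spikes, then ask how often they coincide with Bob's
        ("Carol" fires only when the delayed and fresh spikes line up)."""
        delayed = np.concatenate([np.zeros(delay), alice[:-delay]])
        return float(np.sum(delayed * bob))

    T = 100
    # An edge sweeping across the retina: it hits Alice's receptive field
    # at t=10 and reaches Bob's five steps later.
    alice_move, bob_move = np.zeros(T), np.zeros(T)
    alice_move[10], bob_move[15] = 1, 1
    # A static blink: both fields light up at the same instant.
    alice_blink, bob_blink = np.zeros(T), np.zeros(T)
    alice_blink[10], bob_blink[10] = 1, 1

    print(correlate(alice_move, bob_move, delay=5))    # 1.0: motion at the tuned speed
    print(correlate(alice_blink, bob_blink, delay=5))  # 0.0: no motion detected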
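
As for comment 4's color idea, the Photoshop color-separation step is easy to automate. Here's a short Pillow sketch that splits a picture into red, green, and blue on/off masks, one per hypothetical controller; the filenames and threshold are made up, and the clock-synchronization problem the comment raises isn't addressed:

    from PIL import Image

    def rgb_masks(path, threshold=128):
        """Split a color image into three one-bit masks, one per LED color."""
        r, g, b = Image.open(path).convert("RGB").split()
        binarize = lambda ch: ch.point(lambda v: 255 if v >= threshold else 0, "1")
        return binarize(r), binarize(g), binarize(b)

    # Hypothetical usage: one mask per color channel, each destined for
    # its own set of LEDs.
    red, green, blue = rgb_masks("sagan.png")
    red.save("sagan_red.png")
    green.save("sagan_green.png")
    blue.save("sagan_blue.png")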
