The first time an algorithm made me cry was in 2013. Google had somehow pieced together poignant photos of my Mom in her final weeks of life and timed them perfectly alongside videos of my son in the first weeks of his. It was paired with the right music, used natural transitions, and notified me at a time when I was receptive to watching it. Google did this all automatically, without any input from me.
Since that time, my buddy Google has followed up periodically with a living, breathing highlight reel of my life, including everything from spur-of-the-moment road trips to my daughter's first steps.
But how does it know? The more information I give to Google, the better it is at categorizing my photos and creating compilations that appeal to me. It uses information it has been programmed to recognize (like which lighting and angles are technically better than others), along with inputs from me (like who I tag in photos and which videos I share), to build a model that continually improves over time.
I don't think I take enough photos of llamas or flowers to properly train the model, but it's really good at telling the people in my family apart.
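The feedback loop described above can be sketched in miniature. The toy below is purely illustrative and is not Google's actual system: it's a simple nearest-centroid classifier where every photo the user tags nudges a per-person average, so predictions improve as more tags come in. The class name, labels, and two-number "feature vectors" are all invented stand-ins for real image embeddings.

```python
# Toy sketch of a model that improves with user tags (NOT Google's real pipeline).
# Each tagged photo updates a running average ("centroid") for that person;
# untagged photos are assigned to whichever centroid is closest.

from math import dist

class PhotoTagger:
    def __init__(self):
        # label -> (component-wise sum of feature vectors, count of tagged photos)
        self.centroids = {}

    def tag(self, label, features):
        """Incorporate one user-tagged photo; the model improves with each tag."""
        total, count = self.centroids.get(label, ([0.0] * len(features), 0))
        total = [t + f for t, f in zip(total, features)]
        self.centroids[label] = (total, count + 1)

    def predict(self, features):
        """Guess the closest known person for an untagged photo."""
        def centroid(label):
            total, count = self.centroids[label]
            return [t / count for t in total]
        return min(self.centroids, key=lambda lbl: dist(centroid(lbl), features))

tagger = PhotoTagger()
tagger.tag("mom", [0.9, 0.1])   # invented embedding values
tagger.tag("mom", [0.8, 0.2])
tagger.tag("son", [0.1, 0.9])
print(tagger.predict([0.85, 0.15]))  # → mom
```

A real system would use learned face embeddings and far more data, but the principle is the same: every tag the user supplies makes the next prediction a little better.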
The evolution of the technology and its capabilities is fascinating. Google gathers information from all of us collectively and uses Machine Learning techniques to continually refine its systems.
If you want your own lovingly crafted creations from Google, navigate to your Google Photos Settings page and enable "Suggest new creations." Also, turn on backups on your mobile device directly from the Google Plus mobile application.
See more examples of Machine Learning in our Everyday Encounters blog series >>